
I never made one in the first place. I still don't understand why it wasn't immediately obvious to everyone else that Facebook was toxic and a bad idea. I truly don't understand how other people didn't recognize that. It was blindingly obvious to me.


Fantastic


I take huge offense to this article. They claim that when it comes to AGI, Hinton and Hassabis “know what they are talking about.” Nothing could be further from the truth. These are people who have narrow expertise in one framework of AI. AGI does not yet exist, so they are not experts in it, in how far off it is, or in how it will work. A layman is just as qualified to speculate about AGI as these people, so I find it infinitely frustrating when condescending journalists talk down to the concerned layman. This irritates me because AI is a death sentence for humanity; it's an incredibly serious problem.

As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.

Many people say that automation leads to new jobs, not a loss of jobs. But automation has never before encroached on the sacred territory of sentience. It is a totally different ball game. It is stupid to compare the automation of a traffic light to the automation of the brain itself. It is a completely new phenomenon and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter “automation creates new jobs” simply doesn't cut it.

The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.

AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won't come out of left field, either from neurological research or from pure AI research.

And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned, for all research and inquiries to be made illegal. Some point out that this is difficult to do, but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.


Stating something more times doesn't make it true. Everything you've written is pure speculation, and alarmist at that. There's no proof that AGI is even possible, and if it is possible there's no proof that it will end humanity.


Clearly, AGI-level intelligence is possible, because human brains exist.

So unless you posit that a function has to rely on its particular materialization (that there is something untouchably magic about biological neural networks, and that intelligence is not multiply realizable), it should be possible to functionally model intelligence. Nature shows the way.

AGI will likely obsolete humanity: either deprecate it, or consume it (make us part of the Borg collective). Heck, even a relatively dumb autonomous atom bomb or computer virus may be enough to wipe humanity from the face of the earth.


It's not at all clear that AGI is technically feasible. Human brains exist but we have only a shallow understanding of how they work.

Even if we assume for the sake of argument that AGI is possible, there's no scientific basis to assume that it will make humanity obsolete. For all we know there could be fundamental limits on cognition. A hypothetical AGI might be no smarter than humans, or might be unable to leverage its intelligence in ways that impact us.

Nuclear weapons and malware can cause damage but there's no conceivable scenario where they actually make us extinct.


Something can be possible, while still technically not feasible.

I agree our knowledge is currently lacking, but I see no reason why it will never catch up.

There are fundamental limits on cognition. For one, our universe is limited by the amount of energy available for computation. Plenty of problems can be fully solved, to the point where being ever more intelligent no longer matters (beyond a certain point, two AGIs will always draw at chess). Another limit is practical: the AGI needs to communicate with humans (if we manage to keep control of it), so it may need to dumb things down so we can understand it.

Even an AGI only as smart as the smartest human will greatly outrun us: it can duplicate itself and focus on many things in parallel. Then the improved bandwidth between AGIs will do the rest (humans are stuck with letters and formulas and coffee breaks).

Manually deployed atom bombs and malware can already wreck us. No difference with autonomous (cyber)weapons.


Even anti-alarmists don't ask for proof that AGI is possible. Obviously it is possible. Speculation is the best you get, because nobody is going to be able to prove anything. We haven't proven that global warming is caused by humans, but it's still worth being proactive about greenhouse gases. This is because when something is extremely dangerous, you don't wait around for someone to finish proving it beyond any shadow of a doubt. You probably also think that god exists because nobody can prove otherwise?

And what does alarmist even mean? Do you call people who warn about global warming alarmists? It's such an annoying, nonsense word that boils down to name-calling, really. Discuss the merits of my actual argument. If you think my speculation is wrong, point out a flaw in the chain of logic that leads to my conclusion. Don't just wave your hand and say “you can't prove it” like some evangelical Christian talking about god or global warming. It is seriously infuriating when there is so much at stake.


I ask for proof that AGI is possible. Show me a computer as smart as a lab mouse and then I'll take your concerns seriously.

The analogy to anthropogenic climate change is a non sequitur. Climatologists have created falsifiable theories which make testable predictions.

And you really have no clue about my personal religious beliefs. Calm down and take a seat.


I ask for proof that AGI is possible. Show me a computer as smart as a lab mouse and then I'll take your concerns seriously.

I would argue that, unless you can show why AGI is not - in principle - possible, the null hypothesis should be that it is possible. Unless we veer off into some weird mysticism, it seems that the human brain turns energy and matter into intelligence somehow, operating according to the physical laws of the universe... why shouldn't it be possible to build something else that does the same?


You can't prove a negative. At this point we don't even know what the principles are.

If you're unwilling to provide me with a prototype equivalent to a rodent mind then I'll settle for a fully developed theory of human cognition. Let me know when you've got one. At least that would give researchers some guidelines to know whether they're making forward progress toward AGI.


I never said anything about your religious beliefs, only that your need for hard proof one way or the other is similar to the need displayed by people who want to believe in god and who don't want to believe in global warming. There are many such examples, but the bottom line is that demanding that I show you a sentient computer right now is not reasonable. We have the ability to reason about things without developing formal, mathematical proof. And in this case, there is no possibility that anyone could ever prove that AI is possible or impossible. Nobody can prove what it will do. If you accept nothing less than hard proof, then you are opting out of survival in basically any situation whatsoever. Sometimes logical reasoning is all you have, and this is a case of that.

Yeah, and before the measurements were done, before enough time had elapsed for meaningful change to be measured, all there was was people like me screaming at people like you, trying to make you see. When AI comes you'll have your proof, but it will be too late.


No, that's not how it works. Currently a belief in the inevitable arrival of AGI is a secular religious faith with no basis in hard science. We have plenty of real problems to worry about (like anthropogenic climate change) before we waste time making public policy to prevent something that may never happen anyway. Your concerns are silly, akin to worrying about an alien invasion when we don't even know if extraterrestrial life exists.


You are an outlier. Even Hassabis and Hinton believe that AGI is inevitable. I've never met anyone who thought otherwise. The only disagreement is about the timescale and the consequences of its existence. If nothing I've said can convince you that AI is possible, then I would just ask you to consider that the human brain exists and that we will eventually figure out how it works.


You are correct, in that AI experts may not be the best predictors for AGI. For one, they spend their lives working towards the goal of AGI, so it would require a huge amount of cognitive dissonance for them to say that AGI is impossible or very very far on the horizon.

Philosophers and futurists are better suited to hypothesize an AGI timeline.

But you take it too far by saying it is anyone's game.

Game theory, security, and economic competition make it impossible to globally ban AI. The incentives to automate the economy (compare the AI revolution with the industrial revolution) and to weaponize AI (a Manhattan Project for intelligence) are just too big. We are already seeing that the US focus on fair and ethical AI puts it at a disadvantage against China and Russia. AGI will require pervasive surveillance of the populace, but the Luddites are holding this back.

I suggest you learn to stop worrying about the bomb, and start planning for its arrival.


There are non-doomsday possibilities for AGI. Imagine a super-intelligent AI that was built from the start to value humanity and our way of life, and from there it chooses to protect us and enable us all to do what we want more. (Of course, this could go in dystopian directions, but even those are better than extinction.) A super-intelligent AI that is built to value humans could decide to "uplift" humans' intelligence to be able to keep up with it in places that humans desire to keep up with it.

If we can figure out decision theory and how our values work, then when we figure out AI, we can hopefully build it to be aligned with our values from the start, instead of blindly hoping it happens to play nice with us instead of brushing us off like ants.


You don’t even begin to comprehend what I’m saying. You need to think about this more deeply.

So what if it is possible to create a benevolent AI? Nobody said this isn't possible or even likely. We can also invent a machine that scrubs all the moss off of stones. Just because it's possible for something to exist doesn't mean it's going to proliferate in the free market of the world and everything in it. The only things that matter are the facts that:

1: we will enter an unstable configuration where any AI implementation that can exist will exist

2: the AI implementations that proliferate will be those that are not hamstrung by being forced to include humans in the loop

3: humans will be out of the loop for every conceivable task and therefore not enjoy the high standard of living that they do in 2018


I disagree with points 1 and 2 (and therefore 3). If a friendly AI is built and matures first, then it can protect us from unfriendly AIs trying to mature and take power. (Others call this idea a "singleton".)


That’s a really good point.


> the only way to ensure that human life continues as we know it is for AI to be banned

Is that because you think banned things do not happen? Even if the thing that is banned could confer a massive advantage to the entities developing it?

I think AGI is unlikely to be a thing in my lifetime, or even my children's. But if I were worried about it, I'd probably focus on developing a strategy to create a benevolent intelligence FIRST, rather than try to prevent everyone else from ever creating one via agreements and laws.


I appreciate that you actually suggest a solution. Nobody knows when AGI will come but it could come tomorrow. It could come in 1000 years. No harm in being proactive.

Developing a good AI first is useless because, as I have said, the creation of AI enters us into an unstable configuration where bad AI will crop up regardless. Keeping bad AI from existing is infinitely easier when AI does not exist as a technology at all, as opposed to when it's a turnkey thing.


AI is likely simple and won't require much processing power after all, so it will be impossible to ban, because a ban would imply more regulation and surveillance than would be sustainable. Also, global warming will likely kill us anyhow. The rational conclusion is to enjoy our supermarkets and warm showers as long as they last. They will probably last longer if we deny these threats so as to avoid causing mass panic and nihilism.


We can survive global warming. We can fix it. We can come back from it. We will never come back from AI. It's not impossible to ban AI. And we would be stupid to assume it's impossible instead of trying to find out through an effort to save ourselves.


"These are people who have narrow expertise in one framework of AI." Proof that you don't know who you are insulting


I don’t remember insulting anyone. And how is that not true?


Geoff Hinton is the grandfather of deep learning. Virtually all the modern advancements in AI can be traced back to him and his lab.

What is your track record in AI? It sounds like you have no technical knowledge of AI. For example do you understand the concept of cross entropy loss?


Yes, the grandfather of deep learning. This is exactly my point. All the modern advancements in AI have nothing to do with AGI.

And by the way I happen to know about both of the subjects of the article.


[flagged]


The answers to your questions are:

1: I never said recent advancements are directly leading to AGI. Not in ML.

2: I don’t hold any contradictory ideas in my head

Your comment is aggressive and unpleasant which is an offense that should get you flagged. I constantly get flagged for making comments like yours because I happen to have an unpopular opinion. I can’t believe I put up with all this for YOUR benefit. Do you think I derive pleasure from trying to make people like you see things clearly?

As I have said so many fucking times, a layman is just as qualified as an ML expert to talk about the impact of AI on the world. Just because someone is an expert in a field that is tangentially related to AGI doesn't mean a god damn thing. This is not a discussion about modern ML. But just to make it super easy for you to understand, let me put it this way: even if someone here were an expert in every detail of the theory and practical aspects of implementing an AGI, that person still wouldn't know any more than a layman about the consequences of AI. The point that you so annoyingly cling to is like a car mechanic thinking he is the ultimate authority on how cars will impact the world. You don't have to be a car mechanic to understand and reason about the concept of transportation. Ultimately, the most qualified person to talk about that is an economist or someone like that. Not you. You don't have a deeper understanding of the concept of AGI than literally anyone else.

It is to the benefit of humanity that you observe the deep, fundamental changes that AGI will cause in the basic economics of human life. Dismissing it as “too far off” or “alarmist nonsense” is irresponsible.


So, according to you, ML is only tangentially related to AGI. Therefore we should listen to you, not ML experts - because you are a layman.

Even if I accept your absurd logic quoted above - how do you explain your contradictory goal of stopping all ML research? All the top AI experts are conducting ML research, which according to you is only tangentially related to AGI.

Going further, no AI researcher has managed to build even something as smart as a rat.

I therefore conclude that the human race is in greater danger of being outcompeted by evolution and chance mutations of chimpanzees and dolphins. These are our real competitors and the next contenders for the top position in IQ. We should focus on banning and eliminating chimpanzees and dolphins instead of foolishly protecting them. Why waste time blocking ML research, which is only tangentially related to AGI? Let's take the war to www.reddit.com/r/dolphins and www.reddit.com/r/chimpanzees.

No point wasting time on hacker news.

> I can’t believe I put up with all this for YOUR benefit.

Thanks for looking out for my benefit. I will reciprocate by fighting the chimpanzees for YOUR benefit.


Everything in this comment is wrong. Forgive me for not addressing all of it.

The point about the layman is that the actual substance of my argument should be considered rather than my credentials. You think that your knowledge of ML (credentials) gives you the authority to win a debate without actually debating.

I have never called for a ban on the specific research that is currently ongoing in ML. I have called for a ban on all AI research — not because it’s easy or makes a lot of sense but because it seems to be the only solution. I am receptive to new solutions, the absence of which is quite conspicuous in your comments. You are stuck on credentials and nit-picking.

“So according to you, ML is tangential so therefore listen to you”

I literally spelled this out for you in my comment. Are you blind? The fact that ML is not a direct path to AGI is just an aside. Perhaps I should have focused exclusively on your main error so as not to confuse you. Like I said, the impact of AI on the world is, in essence, an economics question. You don't need to know anything about how an AGI might work to reason about the economic, strategic, and existential changes that AI as a concept will bring about. It is absolutely true that no amount of knowledge about ML or even AGI will help in any way with that line of inquiry.

“We haven’t made robot rats yet”

This is just a permutation of people saying AGI is far off. You don't appear to be in the camp that thinks AGI is impossible. Therefore this comment is irrelevant, because AGI will come at some point, and my argument is primarily about what that will look like, not when it will happen.

It should, by now, be thoroughly clear to anyone who stumbles upon this thread in the future that I am correct. If you want to continue, you can contact me at brian.pat.mahoney - gmail.com


For what it's worth, I considered myself pretty undecided about this issue before, but the tone and content of your comments have moved me significantly against your argument.


*AI to be banned*

Good luck with that.


So sarcasm is what will save us all?


from people like you :)


So AI is not a problem in your eyes?


no


Where in the logic of my argument have I made a mistake?


I think we should invent a new language for humanity. I've tried to design one a little bit.

There are tons of ways to describe the sounds that a human mouth can make, and you can find very detailed nomenclature about it, but generally it is useful to think of a spoken language as consisting of a set of phonemes, which are its fundamental sounds. I looked at a list of the most common phonemes and decided to use the top 20, in order to make the pronunciation and recognition of all the sounds of this new language as easy as possible. Between all the languages of the world, there is considerably more overlap of some sounds than others, so I believe it's useful to use the most overlapping ones.

Most languages consist of open phonemes and closed phonemes, and most include words that end on a closed sound, like the word “stop.” Notice that it is impossible to actually say this word ending on the closed “p” sound without letting out an open sound; in reality the word sounds like “stop-ah” because of the physics of the mouth. In this new language, all words start with a closed sound and end with an open sound, for example “gah,” “dah,” “dee,” “nee” and so on.

If you take a bunch of common closed sounds and open sounds, and have none that sound similar to the others in any way (distinctive sounds are necessary for easy learning and understanding, because after a certain age, discriminating between similar-sounding phonemes is impossible), then you have a list that can create around 40 or 400 “base combinations” of one closed sound and one open sound, like the examples above. I can't remember the exact number but I have it all written out somewhere. Limiting words to 20 or 30 combinations of these base combinations, you have a word space that is probably larger than a human can memorize, even after cutting out all the bad combinations like “dee-dee-dee-dee-dee.” So in the end you have words that might look like “nah-kah-too-nigh.” You give up compactness compared to English but gain advantages in other areas, namely clarity and a truer correspondence between how a word is spelled and how it is pronounced, which is key when learning a new language. It also becomes impossible to have two words that sound the same, like “two” and “too.”

The language would set aside several base combinations, as I mentioned earlier. These singletons, “dee,” “dah,” “nee” and so on, would not have any meaning on their own. The next level up, two of these combinations together, like “dee-nee” or “too-goo,” would have meanings that are fundamental and common, and would act as root words to form other, higher-level words, just as Latin and Greek roots do in English. So for example the root word “dee-soo” means to finish, to terminate, to end. There is a joke in there, by the way. And the root word “vigh-tah” might mean life, animation or prosperity. So the word “dee-soo-vigh-tah” might mean death. And so on. I think the root-word system works very well in English, and doing it very deliberately in a new language would be good.
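To make the combinatorics concrete, here is a rough Python sketch. The phoneme lists below are placeholder assumptions (the actual top-20 list isn't given here), and the root meanings are just the examples above; this only illustrates how the base-combination word space and root compounding would work.

    from itertools import product

    # Placeholder phoneme inventories; the actual "top 20" list isn't specified above.
    closed_sounds = ["b", "d", "g", "k", "m", "n", "p", "s", "t", "v"]  # word-initial, closed
    open_sounds = ["ah", "ee", "oo", "igh", "ay", "oh"]                 # word-final, open

    # A "base combination" is one closed sound followed by one open sound: "gah", "dee", ...
    bases = [c + o for c, o in product(closed_sounds, open_sounds)]
    print(len(bases), "base combinations with these placeholder lists")

    # The word space grows exponentially as base combinations are strung together.
    for n in range(1, 5):
        print(f"words of {n} syllables: {len(bases) ** n}")

    # Root words are pairs of base combinations; higher-level words splice roots together.
    roots = {"dee-soo": "to end", "vigh-tah": "life"}  # meanings taken from the comment above
    print("-".join(["dee-soo", "vigh-tah"]), "-> a candidate word for death")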

Often in English we make up words out of other words, or we start to use two words, a phrase, as its own word. This would be formalized in the new language with vocal and written markers to insert between individual words that, when spliced together, form a good word for some new thing. This might act both as a staging tool for new concepts before they are formally introduced into the dictionary, and as a better way of improvising combinations on the spot for strange or funny one-off concepts.

Unlike Thai and a few other languages, written words have delimiters (!), and unlike English, the delimiter isn't an empty space that can easily be interpreted incorrectly.

The alphabet would consist of very carefully designed letters, each one representing one of the aforementioned base sounds. Each letter would be designed so that it is very difficult to write or read it in a way that causes it to be mistaken for another letter. In English we cross our zeros so as not to confuse them with the letter O; this language would effectively do that in advance for all letters.

Still haven’t thought of what conjugation method would be best. Or other higher level stuff.


I wanted to do this but with Blu-rays. My goal was to build a library of every movie ever made that was worth seeing. It seemed like 2018 was a good time to do this since good movies have apparently stopped being made. Anyway, if you wanted to watch tons of good movies, you would normally end up paying tons of money to rent them from iTunes. And even then you only get to see each one once, and you have to have an internet connection. And streaming services don't have even a fraction of the selection needed. But Netflix's mail DVD service seems to have every movie I can think of. So why not open a few Netflix accounts, order disks in the mail and just save all the disk images?

It seemed like a good idea until I looked into Blu-ray copy protection. Of course, I wanted my library to consist of only the highest quality and highest fidelity, so Blu-rays were called for. But Blu-ray copy protection is devious, ingenious and very effective. Each disk consists of two regions: a region that holds encrypted movie data and a region that holds a key. It is illegal to sell players that read the key and then forward it to a user-facing interface like a computer. Players may only read the key in order to use it internally to decrypt the movie data. This stops all legitimate entities from selling players that reveal the key to the user.

But what about the illegitimate entities that might want to sell modified players that provide the key? Or just publish keys online? Well, the key on the disk is itself encrypted, and it is encrypted in such a way that multiple keys can decrypt it. Blu-ray players come with special hardware that is flashed with a key at the factory. This hardware uses that key to decrypt the Blu-ray's key. In the event that a key is compromised and published online, or used widely in any way, that key is deprecated and all Blu-rays from that point onward contain keys that cannot be decrypted with the compromised hardware key. Instead, a newer key is used. This new key is still able to decrypt all the old Blu-ray keys as well as all the new ones.

This effectively defeats people publishing keys online. It's ingenious in that the people who conceived it realized that the only time key compromise is a problem is when those keys are disseminated widely, and that when keys are disseminated widely they are easy for authorities to detect. If you want to get perfect rips of any Blu-ray you might come across, you are forced to go through the pain of probing the hardware yourself to get that key, which is quite difficult. There's no way around it.
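To make the layered-key mechanism described above concrete, here is a deliberately toy Python sketch. XOR stands in for real ciphers and all key material is invented; the actual AACS scheme (AES, media key blocks, broadcast encryption) is far more elaborate, so treat this purely as an illustration of the wrap-and-revoke idea.

    import os

    def xor(data: bytes, key: bytes) -> bytes:
        """Toy stand-in for real encryption/decryption (XOR is symmetric)."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # Factory-installed device keys baked into player hardware.
    device_keys = {"player_rev_A": os.urandom(16), "player_rev_B": os.urandom(16)}
    revoked = set()  # compromised device keys stop being honored on future discs

    def press_disc(movie: bytes) -> dict:
        """Each disc carries the movie encrypted under a per-disc key, plus that
        disc key wrapped separately under every device key not yet revoked."""
        disc_key = os.urandom(16)
        return {
            "payload": xor(movie, disc_key),
            "wrapped": {name: xor(disc_key, dk)
                        for name, dk in device_keys.items() if name not in revoked},
        }

    def play(disc: dict, player: str) -> bytes:
        """The player unwraps the disc key internally and never exposes it."""
        disc_key = xor(disc["wrapped"][player], device_keys[player])
        return xor(disc["payload"], disc_key)

    old_disc = press_disc(b"movie one")
    revoked.add("player_rev_A")            # rev A's key leaked online, so it is revoked
    new_disc = press_disc(b"movie two")

    print(play(old_disc, "player_rev_A"))  # old discs still play on the compromised player
    print(play(new_disc, "player_rev_B"))  # new discs carry a wrapped key only for rev B
    # play(new_disc, "player_rev_A") raises KeyError: no wrapped key for the revoked player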


I’ve ripped every Blu-Ray I’ve purchased easily with MakeMKV, probably a couple hundred. I dunno, but someone is making it seem easy....


Like I said, MakeMKV (which is indeed the idiomatic tool for the job) will never be a reliable way to rip every Blu-ray. For my purposes, it was very important to be able to rip absolutely anything I came across. If you've got some Blu-rays and want to try it out then fine. It will probably work if they are older movies and were “pressed” long ago. The Blu-ray copy protection scheme also has the quality of making legitimate players obsolete if they were the source of stolen keys. So even a legitimate older player might not be able to read a new Blu-ray.


> The Blu-ray copy protection scheme also has the quality of making legitimate players obsolete if they were the source of stolen keys. So even a legitimate older player might not be able to read a new Blu-ray.

This is the reason why I never bought a single player or disc. Blu-ray and HDMI/HDCP copy protection went way too far (especially with the chain-of-custody nonsense). In the end, if the industry wants to fuck legitimate customers like that, fine, I just torrent everything; this way only a single person/group has to figure out how to rip the disc or broadcast, just once. There is no reason whatsoever to feel bad about it, they did it to themselves; no informed customer should ever buy into that shit. It's self-defense. I'm not buying a TV for thousands just for the copy protection to brick it eventually.


MakeMKV seems to have processing keys dating from late 2018 (at time of writing) so, ignoring BD+, it should be able to decrypt every disc pressed before then.


You may be right, but that doesn't reflect my experience. Most of the BRs I rip have just barely been released (pressed), and I have a 100% success rate with MakeMKV. They all literally worked on the first try, even new releases. Maybe I'm just lucky.

The only problem I’ve had is when they try to make it hard by putting a gazillion titles on the disk to make it hard for you to figure out which is the right one. That sucks, but is not insurmountable.


This has been my experience too. I've never had makemkv fail to open something. I suppose that if the dude who runs it ever gets hit by a bus or something, then it might not be useful going forward from then, but presumably somebody else would take up the work.


As long as you dump the encrypted key, you can reliably trust that there will eventually be a way to decrypt it later.

> In the event that a key is compromised and published online, or used widely in any way, that key is deprecated and all Blu-rays from that point onward contain keys that cannot be decrypted with the compromised hardware key.

If someone merely provides a title key decryption API, is there any way to figure out which device key they're using?


Wow I had not actually thought of that. Hosting that service would cost money, unlike releasing keys on pastebin, and any attempt to do something like this, especially if monetized, would meet considerable retaliation from Blu-ray people. So I guess that’s why you don’t see it.

Getting keys from your hardware is a hassle and I didn’t want to wait for a decryption solution later.


Would that service really take more than a raspberry pi on tor?


Hmm... web/bittorrent over tor, with update file references. Just have the swarm on a given directory, update to the latest "full" swarm every day/release with the same directory for the keys available.


Yeah (DoS factor).


Just hold on to the discs you can't rip. Eventually enough keys will be leaked to decrypt them. Blu-rays will likely die out once the internet is fast enough to stream high-quality video, and at that point you will be able to rip them all when a key is leaked.


The internet is already fast enough to stream high-quality movies. The reason you can only access a comprehensive catalog of movies through iTunes or discs is legal, not technological.


Not at BR quality. Well, it may be fast enough in some areas, but it isn't economical to send out thousands of BR-quality streams, so they don't.

Newer codecs help with that, yes. However, streaming will not be as high quality as a "station wagon full of tapes" i.e. local, in the near future.


Easily at BR quality with a high speed connection — it's only 20-30 mbps — but as you say, consumers don't care enough to pay what it would cost to stream at higher bitrates.
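If you want to sanity-check those bitrate figures yourself, converting a file size into an average bitrate is simple arithmetic. A quick sketch (the disc size and runtime below are illustrative assumptions, not measurements of any particular release):

    def avg_mbps(size_gb: float, hours: float) -> float:
        """Average bitrate in megabits per second for a file of size_gb gigabytes."""
        return size_gb * 8 * 1000 / (hours * 3600)

    print(round(avg_mbps(30, 2)))  # a ~30 GB movie over 2 hours averages about 33 Mbps
    print(round(avg_mbps(40, 1)))  # 40 GB per hour (a figure cited later in the thread) is ~89 Mbps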


Really? Most people still use internet slower than 10 Mb/s and treat a computer as an enemy that works against them.


Source? I don't know of any ISP that still offers 10 Mbit service.


Come out to Western Montana. CenturyLink would only offer 10mbps/1mbps, of which I was lucky to get about half. They let me upgrade last month to 25/2, of which I am lucky if a speed test shows 10/1.5. I'll take it, as it is quite literally the best I can get. I recently paid Spectrum around $5k to extend a line to my house, but I'll have to wait until spring for it and its 400/25 (I am counting the days!).


Haha, come to Australia. That's faster than the connection my whole office at work shares. I don't know anyone who could stream a Blu-ray, which is usually about 40 GB for an hour of video.


https://o2.cz

My top speed is 8 Mb/s. This is the most prominent provider in my country. My plan is 20 Mb/s, but that has literally never happened. A lot of people in my country use wifi providers with outdated equipment, giving them around 10 Mb/s as well. And lastly, check the page on average speeds of the Internet on Wikipedia.

BTW the absolute majority of customers are sharing their bandwidth and the infrastructure is built with that in mind. If everyone streamed 4K videos, the actual speed would go down massively - of course since the only solution would be to rebuild the infrastructure, that is not going to happen, instead they will lower the speed or introduce FUP.

You can see the Czech Republic pretty high in that list - that's because it's the average speed, not the median. I have a friend who has a 600 Mb/s plan that's 3 times cheaper than mine, but that's not the norm - and I guess it's similar all around the world: either you're lucky... or not.


I am a half hour from a major metro; I would gladly pay another $80/mo to have 1.5M on a semi-reliable basis. You take a lot for granted. Our infrastructure sucks outside the suburban bubble.


Wow I'd forgotten how lucky I am right now--that's what I pay for gigabit fiber. Thanks for the perspective.


Pretty standard in the UK outside of major towns/cities.


Not that uncommon outside of major towns/cities either - I was getting barely 12Mbps on ADSL in inner London, and that only 600m from an exchange.


ADSL speeds aren't great, but fibre is available almost everywhere in cities in the UK...


True. Although even on fibre, I only get 25Mbps peak. Often it's hovering around 18-20Mbps (which is not quite enough for 4K streaming, IIRC.)


Of course that's fake fibre aka fibre to the green cabinet in your street. The UK allows that to be marketed as fibre despite the large difference in performance to a real fibre connection.


Having since upgraded to real fibre (I saw the man splice it into the router connector with my own eyes), the difference is staggering - now I regularly get 500Mbps+ up and down with not infrequent 800Mbps+.


And even that is surely not just for you, but shared.


Why not just read the unencrypted data that the Blu-ray players give you, though?


Like, ripping the HDMI output? You're limited to realtime speed for that, which is going to make the process take ten times longer.


Plus, critically, that signal is the result of the player messing around with the data. It might do all kinds of stupid processing, and you don't know what quality of decoder they've put in the player. I guess there are ways to find out, so it's not a dealbreaker, but it's just another thing that complicates stuff. But overall I'd say that finding a player with really good decoding and capturing the resulting signal is your best option for the scenario I outlined. It wouldn't make a difference if it took 2 minutes or two hours because the window between getting a movie and mailing it back is way more than two hours.


> But overall I’d say that finding a player with really good decoding and capturing the resulting signal is your best option for the scenario I outlined.

Exactly.


Also HDCP becomes an issue.


You can get HDMI splitters on Amazon that will get rid of HDCP.


I've ripped a few of my own BR discs; it hasn't been an insurmountable problem.


You make it sound like you cracked the disk encryption or lifted keys off your player. Or did you just try out keys you found online?


Nope, just used Handbrake. It uses whatever library handles BR decoding and key management. I believe sometimes a key hasn't yet been leaked for newer discs; however, if you wait a few months it will have been.


Well, I never thought I would change my position on VR but here we are. If this works it will make VR viable.

It’s so disappointing to remember the hype in 2012 and then look at VR in 2018 only to see an anemic library of titles, little adoption, and not much improvement of the actual headsets. It’s pathetic.

I've heard of some headsets in the pipeline that are supposed to actually be good. Extremely high resolution and full field-of-view coverage are absolutely required. There is no technological limit stopping a headset like that from existing, and yet it doesn't exist. Pimax, maybe. Am I missing one?

If you control motion sickness and have a headset like I described above, then you have a minimum viable system. Going beyond that, we need non-discrete light fields and perfect body motion capture.


VR ‘cured’ my motion sickness. I worked on a VR project in 2017. Before that I'd get carsick unless I was driving. At the start any artificial motion in VR made me queasy, but I got used to it. Now I'm fine.


> Extremely high resolution and full field of view coverage are absolutely required. There is no technological limit that is stopping a headset like that from existing, and yet it doesn’t.

GPUs are the technological limit.

The Vive and Rift push 2160 x 1200 pixels at 90Hz. 90Hz is the bare minimum speed for VR. Nvidia's new RTX 2080 Ti (retail $1,200) cannot maintain 90Hz at 3840x2160 in most games. Two in SLI probably could, but that is much more expensive than the original Vive/Rift CV1 GPU requirements were. We are several generations of GPUs away from being able to double the VR resolution while maintaining 90Hz for less than $600 worth of hardware.
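Rough arithmetic on the pixel throughput implied by those figures (resolutions and refresh rate taken from the comment above):

    def pixels_per_second(width: int, height: int, hz: int) -> int:
        return width * height * hz

    vive_rift = pixels_per_second(2160, 1200, 90)  # combined panels of the Vive / Rift CV1
    uhd_90 = pixels_per_second(3840, 2160, 90)     # 4K at the same 90Hz refresh rate

    print(f"{vive_rift / 1e6:.0f} vs {uhd_90 / 1e6:.0f} Mpix/s "
          f"({uhd_90 / vive_rift:.1f}x the shading work)")
    # roughly 233 vs 746 Mpix/s, about 3.2x more pixels to shade per second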


We're getting very close to high-quality and inexpensive eye-tracking technology that will make it cheap and easy to implement foveated rendering in VR headsets (basically, rendering at full quality only where your eye is looking, in a way that is completely unnoticeable). That will really drop the hardware requirements and make high-field-of-view, high-resolution headsets totally doable.
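As a back-of-the-envelope illustration of why this helps, here is a toy calculation. The render-target size, fovea coverage and peripheral downscale factor are made-up assumptions, not measurements of any headset or eye tracker:

    # Hypothetical numbers only: how much shading work foveated rendering could save.
    full_w, full_h = 3840, 2160   # assumed per-frame render target
    fovea_fraction = 0.2          # fovea region covers ~20% of width and height (assumption)
    periphery_scale = 0.25        # periphery rendered at 1/4 linear resolution (assumption)

    full_pixels = full_w * full_h
    fovea_pixels = (full_w * fovea_fraction) * (full_h * fovea_fraction)
    periphery_pixels = (full_pixels - fovea_pixels) * periphery_scale ** 2

    shaded = fovea_pixels + periphery_pixels
    print(f"shaded fraction: {shaded / full_pixels:.0%}")  # about 10% of the naive cost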

It's an exciting time for VR.


You are, of course, expecting a certain level of detail in games to have this kind of limitation. I would expect a game like Rez (PS2) to be easily runnable in super high resolutions. Obviously games like RE7 and Fallout 4 wouldn't run without significant changes and engine optimisations, but I could live with simpler looking games.


Just because a headset has high resolution, and therefore complete FOV coverage and no screen-door effect, does not mean that resolution has to be fully utilized. If someone has a weak computer, they can render at a lower resolution and over a smaller area and scale it up. Every headset should be capable of high resolution, because anything less than that is not compelling VR. Multiple GPUs are fine.


I agree that in general it is a very important improvement for VR to allow motion-sensitive people to enjoy it. However I do not agree this by itself will make VR much more viable than it is now. Motion sickness is just one problem, but without addressing others (high price, low resolution, narrow FOV, bulky headset), it will not get much more viable.

And I say it as someone who considers current state of VR to be amazing and enjoys it daily. But personally for me relatively low resolution would be #1 priority to improve.


I've read a research study showing that better resolution doesn't actually help.


No, not for nausea. I'm talking about just having a good VR experience.


This website really should be free of all discussion of those matters. Unfortunately only one half of the people who make political remarks are silenced.


Case in point


I find it very strange that I've never seen a single article like this about SpaceX. Doesn't it stand to reason that if Musk runs both companies, there would also be a juicy scoop about SpaceX employees and their hardship? I suppose it has nothing to do with the fact that Tesla is publicly traded and SpaceX is not.

Elon Musk was totally rational during the Thai submarine thing. He posted a series of very logical and straightforward suggestions, was then asked by the divers to build the submarine, and was consequently called a fraud by some random person who had associated himself with the rescue. It was quite rude. Elon responded in kind, and in my opinion he was very measured in his reply, considering he had just received a very unfair and mean attack on his character. But all this Wired article says is that he called someone a pedo, as if he had lost his mind. Essential context is conspicuously absent.


This is brilliant.


Exactly. Living your life with zero buffer against fluctuations in the availability of life-critical materials is just as stupid as prepping for zombie invasions. One guy thinks nothing will ever happen and the other thinks society is a transient entity; both are fantasies. If you aren't ready for earthquakes, fires, financial disasters and so on, then you have a short attention span or you're just lazy.

Besides all this, consolidating and integrating life support at the household level is the way of the future. If you look at society as one big system, it makes a lot of sense to have lots of redundant units rather than all units depending on one central resource. Solar and batteries, for example, harden the whole country against equipment failure, terror attacks, negligence and natural disasters; with a single power plant, any one of those things could bring down huge numbers of houses. Technology is making it possible to produce some of the things you need at home, so there is a funny convergence of preppers and technologists.


No idea why you've been downvoted; as you say, there are so many potential, but low-risk, scenarios. Even in America and the EU we've all got a 0.5%, maybe 0.1%, chance every year that something goes very wrong. Most likely it doesn't spiral, like the 2008 financial crisis, but there was a real, though small, chance that could have gone much worse.

His comment not only ignores real-life examples that have happened within his lifetime (Afghanistan, Iraq, Syria, Venezuela, Katrina, Puerto Rico, etc.), it also postulates that these new governmental structures would magically spring up overnight, rather than taking years or even decades to emerge.

I'm no prepper, but I do feel that I am knowingly taking a risk, albeit a very small one, by not being one. If I had a family to care for, I would definitely be more prepared with 3 months food and some emergency medical supplies beyond a first aid kit. It's a small enough risk that it will probably not happen for my generation or my country, but it does happen.

I also often don't buy insurance for things I think aren't worth it, but it's still a deliberate choice rather than sleep walking into it.

The risk is almost on the same order as home insurance, so if you're insuring your home, why aren't you 'spending' a little time each year prepping? Is it just because prepping is socially unacceptable, while home insurance is regarded as socially acceptable?


It doesn't even have to be something destructive like Iraq or Puerto Rico; Argentina comes to mind: a serious economic collapse, maybe not Venezuela-style, but enough that it disrupted things heavily.

I agree with the parent -- it's just insurance, and is a cost-benefit trade-off that needs to be evaluated in a similar fashion as flood insurance or the like.

There are a lot of what I'd call "psychological" or perception-based factors for doing things like hoarding guns and food, and it's easy to go overboard, but they're not fundamentally poor choices.


