
Unless it's invertible.


Hash functions can't be invertible due to the pigeonhole principle: there are more possible inputs than possible outputs for a fixed-size hash.
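
To make the pigeonhole argument concrete, here's a minimal sketch (my own illustration, using SHA-256 truncated to 16 bits as a stand-in for any fixed-size hash): with only 2^16 possible outputs, scanning 2^16 + 1 distinct inputs is guaranteed to turn up two that collide, so no inverse can exist.

    import hashlib

    def tiny_hash(s: str) -> int:
        # 16-bit "hash": the first two bytes of SHA-256 (any fixed-size hash works here)
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:2], "big")

    seen = {}
    for i in range(2**16 + 1):          # one more input than there are possible outputs
        msg = f"input-{i}"
        h = tiny_hash(msg)
        if h in seen:
            print(f"collision: {seen[h]!r} and {msg!r} both hash to {h}")
            break
        seen[h] = msg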


> "It's no different than consoles" is a pretty strong statement to make unsupported. There are many differences.

Like what? The PS5, Xbox Series X, PS4, and Xbox One are very sophisticated, comparable to a modern desktop PC, yet they're totally closed off. Consoles no longer resemble embedded devices like previous gens did. Heck, even the original NES used the MOS 6502 chip, which was very popular in PCs in the early days.


Why did you use neural networks? There are techniques in analytical geometry that can extract surface contours from color gradients in images, and they do this faster and more directly.


1) My bread and butter for the last 10 years has been machine learning. When all you have is a hammer...

2) We don't extract surface contours, we learn a volumetric radiance field! To oversimplify, we learn a (smooth) function that, given a position in space, produces the differential opacity and color at that point. To render an image from a camera viewpoint, we approximately integrate along rays emitted from each pixel of the camera.
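
For the curious, here is a minimal NumPy sketch of that rendering step (my own toy illustration of NeRF-style volume rendering, not the authors' code): the learned field is queried at sample points along a camera ray and the samples are alpha-composited into a single pixel color.

    import numpy as np

    def render_ray(field, origin, direction, t_near=0.1, t_far=4.0, n_samples=64):
        t = np.linspace(t_near, t_far, n_samples)            # sample depths along the ray
        pts = origin + t[:, None] * direction                # 3D sample positions
        density, rgb = field(pts)                            # (N,), (N, 3) from the learned field
        delta = np.diff(t, append=t[-1] + (t[1] - t[0]))     # spacing between samples
        alpha = 1.0 - np.exp(-density * delta)               # opacity contributed by each segment
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance so far
        weights = trans * alpha
        return (weights[:, None] * rgb).sum(axis=0)          # composited pixel color

    # Toy stand-in for the learned field: a fuzzy red ball of radius 1 at the origin.
    def toy_field(pts):
        d = np.linalg.norm(pts, axis=-1)
        density = 5.0 * np.clip(1.0 - d, 0.0, None)
        rgb = np.tile([1.0, 0.2, 0.2], (len(pts), 1))
        return density, rgb

    print(render_ray(toy_field, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))

In the actual method the field is a neural network and the compositing above is differentiable, which is what allows training from posed photos.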

Check out NeRF and our paper to learn more about this representation!


Neural networks do perform better than classical methods here.

One of the best non-classical methods is this one (https://grail.cs.washington.edu/projects/sq_rome_g1/), and our method significantly improves upon it. We do not compare directly with it, but Neural Rerendering in the Wild does, and we improve upon that.


These NeRF models are like 5 MB and have a ton of directional lighting support: speculars, caustics, refraction, mirrors, you name it!

Also, they're way higher quality than traditional techniques.


> If the mechanism of action is well understood

In OP's paper, I'm not sure they meet this criterion.


"People who say AGI is a hundred years away also said GO was 50 years away" this is not true. The major skeptics never said this. The point skeptics were making was that benchmarks for chess (IBM), Jeopardy!(IBM), GO (Google), Dota 2 (OpenAI) and all the rest are poor benchmarks for AI. IBM Watson beat the best human at Jeopardy! a decade ago, yet NLP is trash, and Watson failed to provide commercial value (probably because it sucks). I'm unimpressed by GLT-3, to me nothing fundamentally new was accomplished, they just brute forced on a bigger computer. I expect this go to the same way as IBM Watson.


One expert predicted in mid-2014 [1] that a world-class Go AI was 10 years away. AlphaGo defeated Lee Sedol 18 months later.

It's not 50 years, but it does illustrate just how fraught these predictions can be and how quickly the state of the art can advance beyond even an insider's well-calibrated expectations.

(To his credit the expert here immediately followed up his prediction with, "But I do not like to make predictions.")

[1] https://www.wired.com/2014/05/the-world-of-computer-go/


People also predicted that 2000 would have flying cars. The moral of the story is that predicting the future is very difficult and often inaccurate for things we are not close to achieving, not that things always come sooner than predicted.


We have flying cars. What we don't have is a flying car that is ready for mass adoption. The biggest problem is high cost both for the car and its energy requirements, followed by safety and the huge air traffic control problem they would create.


As a counterpoint, when AlphaGo came out I was surprised it took so long, because Go really seems like a good use case for machine learning supremacy: 1) the Go board looks particularly amenable to convnet analysis, and 2) the game is abstract enough for humans to have missed critical strategies, even after centuries.

I wish I were on record saying that, so take what I say with a grain of salt.


Ultimately the greatest factor is stereotypes about inventors. The OpenAI team doesn’t remind anyone of say the Manhattan Project team in any way. They don’t look act or sound like Steve Jobs and Steve Wozniak. Elon Musk does, and that’s why I think people get so excited about rockets that land themselves. That is honestly pretty cool. Very few people pull stuff like that off. But is it less cool than GPT3?

Sam Altman and Greg Brockman were also online payments entrepreneurs like Elon Musk, so it's not like it's about their background / prior history. It's also not about sounding too grandiose or delusional; Musk says way crazier stuff on Twitter than Greg Brockman has ever said in his life. It's clearly not about tempering expectations. Musk promises self-driving cars every year!

So I think there are a lot of factors that impact the public consciousness about how cool or groundbreaking a discovery is. Personally I think the core problem is the contrivance of it all, that the OpenAI people think so much about what they say and do and Elon does not at all, and that kind of measured, Machiavellian strategizing is incommensurable with public demand for celebrity.

What about objective science? There was this striking Google Research paper on quantum computing that put the guy who made “some pipes” first author. I sort of understand abstractly why that’s so important but it’s hard for me to express to you precisely how big of a discovery that is. Craig Gentry comes to mind also as someone who really invented some new math and got some top accolades from the academy for it. There is some stereotyping at play here that may favor the OpenAI team after all - they certainly LOOK more like Craig Gentry or pipes guy than Elon Musk does. That’s a good thing so I guess in the pursuit of actually advancing human knowledge it doesn’t really matter what a bunch of sesame grinders on Hacker News, Twitter and Wired think.


What would be a good benchmark? In particular, is there an accomplishment that would be: (i) impressive, and clearly a major leap beyond what we have now in a way that GPT-3 isn't, but (ii) not yet full-blown AGI?


How about driving a car without killing people in ways a human driver would never kill people (i.e. mistaking a sideways semi truck for open sky)?

That's a valuable benchmark loads of companies are aiming for, but it's not a full AGI.


Maybe nothing? “Search engines through training data” are already the state of the art, and have well documented and mocked failure cases.

Unless someone comes along with a more clever mechanism to pretend it’s learning like humans, you’re not looking at a path towards AGI in my opinion.


> you’re not looking at a path towards AGI in my opinion

What I'm trying (and apparently failing?) to ask is, what would a step on the path towards AGI look like? What could an AI accomplish that would make you say "GPT-3 and such were merely search engines through training data, but this is clearly a step in the right direction"?


> What I'm trying (and apparently failing?) to ask is, what would a step on the path towards AGI look like?

That's an honest and great question. My personal answer would be to have a program do something it was never trained to do and that could never have appeared in its corpus. And then have it do another thing it was never trained to do, and so on.

If GPT-3 could, say, 1) never receive any more input data or training, then 2) read an instruction manual for a novel game that shows up a few years from now (so it can't be replicated from the corpus), 3) play that game, and 4) improve at that game, that would be "general" imo. It would mean there's something fundamental about its understanding of knowledge, because it can do new things that would have been impossible for it to mimic.

The more things such a model could do, even crummily, the more it would count as a "general" intelligence. If it could get better at games, trade stocks and make money, fly a drone, etc., even in a mediocre way, that would be far more impressive to me than a program that could do any one of those things individually well.


If a program can do what you described, would it be considered a human-level AI yet? Or would there be some other missing capabilities still? This is an honest question.

I intentionally don’t use the term AGI here because human intelligence may not be that general.


> human intelligence may not be that general

Humans have more of an ability to generalize (ie learn and then apply abstractions) than anything else we have available to compare to.

> would it be considered a human-level AI yet

Not necessarily human level, but certainly general.

Dogs don't appear to attain a human level of intelligence but they do seem to be capable of rudimentary reasoning about specific topics. Primates are able to learn a limited subset of sign language; they also seem to be capable of basic political maneuvering. Orca whales exhibit complex cultural behaviors and employ highly coordinated teamwork when hunting.

None of those examples appear (to me at least) to be anywhere near human level, but they all (to me) appear to exhibit at least some ability to generalize.


From grandparent post:

> 2) read an instruction manual for a novel game that shows up a few years from now (so it can't be replicated from the corpus), and 3) plays that game, and 4) improves at that game, that would be "general" imo.

I would say that learning a new simple language, basic political maneuvering, and coordinated teamwork might be required to play games well in general, if we don't exclude any particular genre of games.

Complex cultural behaviors might not be required to play most games, however.

I think human intelligence is actually not very 'general' because most humans have trouble learning & understanding certain things well. Examples include general relativity and quantum mechanics and, some may argue, even college-level "elementary mathematics".


Give it an algebra book and ask it to solve the exercises at the end of the chapter. If it has no idea how to solve a particular exercise, it should say "give me a hand!" and be able to understand a hint. How does that sound?


That makes me think we are closer rather than farther away because all that would be needed is for this model to recognize the problem space in a question:

“Oh, you are asking a math question, you know a human doesn’t calculate math in their language processing sections of their brain right, neither do I... here is your answer”

If we allowed the response to delegate commands, it could start to achieve some crazy stuff.


> probably because it sucks

It's not technically bad, but it requires domain experts to feed it domain-relevant data, and it's only as good as that setup phase, which is extremely long, expensive and convoluted. So yeah, it sucks, but as a product.


Whenever someone talks about how AI isn't advancing, I think of this XKCD comic from not too long ago (maybe 2014-ish?), in which "check whether a photo is of a bird" was classified as "virtually impossible".

https://xkcd.com/1425/


Read the alt-text. Photo recognition wasn't impossible in 2014; it was impossible in the 1960s, and the 2014-era author was marvelling at how far we'd come / making a joke about how some seemingly simple problems are hard.


"More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature."

While their sources support this statement, I'm getting mixed signals.

https://www.nature.com/magazine-assets/d41586-019-00857-9/da...

This is their primary source: the statisticians calling to retire statistical significance. However, their primary reasoning is that statistics is misused to draw erroneous conclusions. It seems like the problem is practitioners' lack of understanding of the philosophy and mathematics behind statistics, not statistics itself.
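
As a concrete example of the kind of misuse they describe (my own toy simulation, not from the Nature comment): run enough tests on pure noise and roughly 5% will clear p < 0.05 by chance alone, which a careless practitioner can then report as a "significant" finding.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests, false_positives = 1000, 0
    for _ in range(n_tests):
        a = rng.normal(size=30)        # two groups drawn from the SAME distribution
        b = rng.normal(size=30)
        _, p = stats.ttest_ind(a, b)   # there is no real effect to detect
        if p < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_tests} noise-only comparisons came out 'significant'")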


Do they have a different bytecode and runtime from WASM? Why not unify everything on WebAssembly bytecode?


WASM bytecode is structured into a tree, which means an interpreter loop wouldn't perform well. You'd really want to flatten it into a simpler bytecode before interpreting it -- and you'd want to do other transforms on it before optimizing it.

I don't think WASM bytecode is a good format for execution, and it's only mediocre as a compilation target.
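
A toy illustration of what "flattening" means here (hypothetical Python, not real WASM tooling): in the structured form, a `br n` names an enclosing block by nesting depth, so a pre-pass has to resolve each branch into a concrete jump offset that a simple dispatch loop can follow.

    # Structured form: branches name enclosing blocks by depth, not by address.
    structured = [
        ("block",),          # index 0: outer block
        ("block",),          # index 1: inner block
        ("i32.const", 1),
        ("br", 1),           # branch to the outer block's label (0 = innermost, 1 = one level out)
        ("end",),            # closes inner block
        ("i32.const", 2),
        ("end",),            # closes outer block
    ]

    def flatten(code):
        """Resolve each ("br", depth) into ("jump", index just past the target block's end)."""
        flat = list(code)
        ends, stack = {}, []
        for i, op in enumerate(code):            # first pass: record where each block ends
            if op[0] == "block":
                stack.append(i)
            elif op[0] == "end":
                ends[stack.pop()] = i + 1
        stack = []
        for i, op in enumerate(code):            # second pass: rewrite branches as jumps
            if op[0] == "block":
                stack.append(i)
            elif op[0] == "end":
                stack.pop()
            elif op[0] == "br":
                flat[i] = ("jump", ends[stack[-1 - op[1]]])
        return flat

    # block/end markers are kept only for readability; a real flattener would drop them.
    print(flatten(structured))   # ("br", 1) becomes ("jump", 7)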


Then why not abandon WASM, and make this bytecode a target for LLVM?


LLVM bitcode is CPU and ABI dependent, and isn't even stable between LLVM releases.


Bad question on my part. More relevant: why WASM? From what I gather, JS seems to already have a bytecode for its JIT runtime. Why not expose that so that we can have C++ on the web?


> js seems to already have a bytecode for their JIT runtime

This is kind of a nonsensical statement. JS itself doesn't have a bytecode; it's just a specification of the language you write scripts in.

Each implementation has its own intermediate representations, including bytecode formats.

WASM is a vendor-neutral standard that is designed as a compilation target and designed with safety in mind.

SpiderMonkey/JSC/V8 internal bytecode formats are just that, compiler internals.


I asked the same question; see below for an interesting thread.


Sigh. I think the modern web was a mistake, built on '90s hacks for sharing documents. Can we please get a fresh restart from scratch?


This isn’t a standard.


They are different.

Unifying is not a bad idea.

