Hacker News

How is OpenAI misleading? The entire post from OpenAI is about the physics of the problem (different-sized cubes, materials, etc.)

When I saw the press release I understood the demonstration was about hand dexterity and not trying to use AI to solve a Rubik’s Cube pattern. That would be overkill IMHO. You don’t need a neural net to solve it and I never thought OpenAI was trying to mislead.

Side note: One of the commenters on the Twitter thread referred to Marcus as the James Randi of AI in jest. I worked for Randi for several years handling the Million-Dollar Paranormal Challenge and investigating unusual claims. I can tell you a lot about misleading claims...




Because this has become a pattern in OpenAI's MO.

Take a very impressive research achievement (a large LSTM for byte-level language modeling, GPT-2). Present it in a hyperbolic manner ("we've discovered a single neuron that captures sentiment", "the full GPT-2 is too dangerous to release"). Wait for the press to eat it up, and if the technical press calls them out on misleading claims, even better, because it'll get even more traction. Wait for defenders to show up stating that the original research achievement was impressive. Make no effort to clarify the misleading claims.

The misleading word here is "solve", which can have two meanings: to derive a solution for a Rubik's cube, or to manipulate a Rubik's cube into a solved state. The casual reader absolutely assumes the former (which also sounds like a challenging, intellectual task), whereas the technical achievement here is the latter. But of course, a press release titled "Solving Rubik's Cube with a Robot Hand" sounds much more impressive than "Manipulating Rubik's Cube with a Robot Hand".

I say this as a person who has benefited from their great research output and models: please stop playing this terrible PR game. You do your research work a disservice by muddying the waters like this.


"Too dangerous to release" itself seems like a somewhat misleading summary of OpenAI's position? Figuring out how to do AI research responsibly is a core part of their mission, not a PR game they're doing just to make headlines.


GPT-2 had no LSTM.


> Because this has become a pattern in OpenAI's MO.

I wonder if this is because it's Elon Musk's MO. It seems exaggeration and hyperbole are what he does.


Elon Musk hasn't been involved in OpenAI for a while now.


And landing rockets on boats before refitting and relaunching them into orbit, but yeah... totally hyperbole.


I think he's more likely referring to Elon Musk's predictions about Autopilot and Tesla. Elon Musk has been promising "full self-driving" capability for years now (by which I mean he has already missed his first public deadline by years). I also find The Boring Company to be a little overly ambitious, as well as many other public comments Musk has made over the years.

I don't think there are many people saying Elon Musk hasn't done some absolutely amazing things. But he's famous for being late to deliver on just about everything, and some of his comments about self-driving cars are considered truly pie-in-the-sky by experts.


More like a parabola, right?


> How is OpenAI misleading? ...When I saw the press release I understood the demonstration was about hand dexterity and not trying to use AI to solve a Rubik’s Cube pattern.

Uh... the headline of the article was literally "Solving Rubik’s Cube with a Robot Hand".

Kudos to you for apparently reading past the headline and understanding that the demo was actually about hand dexterity. But come on. The average layman reading the headline is going to believe that what's newsworthy is that OpenAI solved a rubik's cube.

If OpenAI didn't intend for that to be the case, they should have used different words. For example, "Using a neural net to achieve a breakthrough in robot hand dexterity" would actually describe what the demo is about.

Unfortunately that is about 1000x less interesting than "AI robot solves rubik cube". Which is why OpenAI didn't choose that headline, and why people like Gary Marcus are criticizing them for being misleading.


> Kudos to you for apparently reading past the headline and understanding that the demo was actually about hand dexterity. But come on. The average layman reading the headline is going to believe that what's newsworthy is that OpenAI solved a rubik's cube.

I criticize OpenAI's aggressive marketing as much as the next guy, but I don't actually feel this was one such case. I only ever assumed from the headline + video that they were using neural nets to control the hand, not to solve the Rubik's cube.

I'm not an average layman, though, so YMMV.


> Uh... the headline of the article was literally "Solving Rubik’s Cube with a Robot Hand".

Which part of that statement isn't true?

You read that headline one way. Myself another. When someone actually reads the article (a lost art now) you get full context in case there was any confusion.

I'd argue that calling OpenAI misleading (in this instance) is even more misleading. From Marcus's Tweet I assumed that he had problems with OpenAI's actual claims. Nope. He just didn't like the headline.


More like 1/1000x; there are a bunch of Rubik's cube solvers on GitHub.

Dexterity to manipulate a Rubik's cube is really incredible, especially through the entire sequence of solving it. It's a very well-chosen dexterity challenge.

This whole criticism is bizarre.
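To illustrate why the "solving" half is considered the easy, purely symbolic part: if you already know the scramble, the solution is just the scramble inverted and reversed. This is a toy sketch (the function names are mine, purely illustrative); general solvers such as Kociemba's two-phase algorithm find short solutions from an arbitrary state, also without any neural net.

```python
# Toy illustration: deriving a move sequence that solves a Rubik's cube
# is simple symbol manipulation. Here, the solution for a known scramble
# is the inverse of each move, applied in reverse order.

def invert_move(move: str) -> str:
    """Invert one move in standard face-turn notation (R, R', R2, ...)."""
    if move.endswith("'"):   # counter-clockwise -> clockwise
        return move[:-1]
    if move.endswith("2"):   # half turns are their own inverse
        return move
    return move + "'"        # clockwise -> counter-clockwise

def solution_for_scramble(scramble: str) -> str:
    """Undo a scramble by applying the inverse moves in reverse order."""
    return " ".join(invert_move(m) for m in reversed(scramble.split()))

print(solution_for_scramble("R U R' U' F2"))  # F2 U R U' R'
```

The hard part in OpenAI's demo is everything this sketch ignores: perceiving the cube state and physically executing those moves with one robot hand.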


The claim "I solved a Rubik's cube using a computer" is totally uninteresting.

The claim "I solved a Rubik's cube using a neural network" is different, and much more interesting than the first claim.


It really isn't. It's quite obviously possible and not worth the effort if you happen to know anything about Rubik's cubes and neural networks.

In fact, I would be much more interested in "I solved a Rubik's cube using a computer" because you can then talk about the mathematics of a Rubik's cube (presumably the algorithm used is a human-comprehensible one), while for "I solved a Rubik's cube using a neural network" the only sensible question is "and how badly did you have to overfit to do that?"


You can assume the "computer" solution is overfitted, too. There's no reason to, because general methods are well-known, but it's even easier to just hardcode a cube and the list of moves that solves it than it is to implement one of those general methods.

Why assume that "I solved a Rubik's cube using a neural network" guarantees that I cheated?


Yeah, because what they did is actually 1000x less interesting than actually solving the task, not just that it "sounds" less interesting.


It really feels to me like Gary Marcus is being a pedantic contrarian, particularly after reading that reddit thread.


From what I can gather, he's a proponent of some other approach to AI (symbolic, maybe?) with a long-running grudge against deep learning.


Both of them (OpenAI and Marcus) routinely make more noise than needed.


I felt that the communication in the PR was not clear. Even on this point: the cube was heavily instrumented.


(I work at OpenAI.)

Per https://news.ycombinator.com/item?id=21306452, we have results for both instrumented and uninstrumented cubes!

Our cube variants are listed in the blog post ("Behind the scenes: Rubik’s Cube prototypes"), and results are in the paper in Table 6.


According to Table 6, the success rate for the uninstrumented cube is 20% when applying half of a fair scramble and 0% for a full scramble. Right?


Yes — but note that "success" is a not-very-granular metric (which we set for ourselves!), as it means fully unscrambling the cube without dropping it once.

To be clear, that means executing up to 100 moves without a drop. If you put the cube back in the robot's hand, without any additional effort it'll continue solving unfazed.
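A hedged sketch of what such a binary success metric looks like (the trial records and field names below are invented for illustration, not from the paper): a trial counts as a success only if the entire solve sequence completes without a single drop, so a run that gets 90% of the way there scores the same as an immediate drop.

```python
# Hypothetical sketch of an all-or-nothing "success" metric:
# a trial succeeds only if every required move was executed
# and the cube was never dropped. Trial data is made up.

from dataclasses import dataclass

@dataclass
class Trial:
    moves_completed: int   # face rotations + flips actually executed
    moves_required: int    # length of the full solve sequence
    dropped: bool          # whether the cube was ever dropped

def success_rate(trials: list[Trial]) -> float:
    ok = sum(1 for t in trials
             if not t.dropped and t.moves_completed >= t.moves_required)
    return ok / len(trials)

trials = [Trial(50, 50, False), Trial(43, 50, True), Trial(50, 50, True),
          Trial(50, 50, False), Trial(20, 50, False)]
print(success_rate(trials))  # 0.4
```

Under a coarse metric like this, long near-misses contribute nothing, which is why a 0% figure on full scrambles can coexist with the robot completing long stretches of the solve.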


There were no uninstrumented cubes. What OpenAI claims to be a "regular Rubik's cube" is in fact not regular: it has cut color stickers. OpenAI couldn't get a regular Rubik's cube working.


The video they released hinted at instrumenting but I thought it was for validation purposes only. Interesting, thanks.


Can you point to some particularly amusing writeups, please? I've found the existence of the prize has helped persuade some of my less skeptical friends against pseudoscience.

https://en.wikipedia.org/wiki/One_Million_Dollar_Paranormal_...


There used to be a newsletter called SWIFT where Randi would write up various attempts. I left the organization over a decade ago and have no idea what happened to those articles since then. The current website only goes back five years. Yikes.



