Hacker News
Forget Apple Vision Pro – rabbit r1 is 2024's most exciting launch yet (techcrunch.com)
20 points by akmittal 4 months ago | 24 comments



Rabbit R1 is an interesting idea, but in practice I don’t see how it’s the killer solution here. It seems like a feature that should be built into existing ecosystems, not a standalone device. I’m not sure what moat they’ve created here that Google or whoever couldn’t just fold into their assistants soon.


Teenage Engineering has an aesthetic energy to it that I would be surprised if FAANG could easily duplicate.


But I don’t need aesthetics, because I don’t need hardware at all. Your phone should be able to do this already, with the added benefit of already having all the apps you use connected in the first place.


This gets to the heart of the conundrum for me.

The whole point of this interface is to be minimal, to be less, to be un-intensive and simple. It's a pleasure that it's embodied in a high-end crafted object, but also, so what?

Meanwhile the actual object of concern, what we interface with, is highly virtual. And it's incredibly hard to judge from afar what Rabbit's offering really is here. How much this would actually help in daily life is a huge unknown, especially weighed against whether it's justified as yet another object to carry.


Rabbit has made some really smart business choices: contracting / partnering with Teenage Engineering, keeping the entry cost low, no subscription, and partnering with Perplexity ($200 worth of credits). It feels like a "no-brainer purchase", which is a great thing to pull off.

Fwiw, I did not buy one.

I feel like no one really knows yet whether Rabbit will be useful to them (a lot of hype). And as for VR, though it's cool, people who bought in early know it almost always ends up on the shelf (after the honeymoon period).


I pre-ordered one... it seemed interesting as a concept and it was cheap enough to be in "toy" territory. Worst case I have a device to hack on with a really stylish case.


Was thinking the same thing when I pre-ordered mine. Would be interesting to see what they let us do with the device.


One too cheap, the other too expensive, and neither necessary nor really likely to be any better than anything we already have. I'll wait both of these out, I think.


Those two products have nothing to do with each other, so the mention of Apple Vision Pro looks like clickbait, and the article is an obvious puff piece that makes Teenage Engineering sound like a much bigger deal than it is in reality. Is the author related to someone there?


My impression is that the author is mistaking the future they think should happen for the future that will happen. I think it’s more likely folks become more internet-obsessed and disembodied, so to speak, even though they might say they want more tech that helps them remain present in the real world.


> The premise of the rabbit r1, in case you missed it, is that it does most, if not all, of what your smartphone can do, but it uses AI to accomplish all the tasks in response to natural language queries. So that could be playing music, booking airfare, providing directions, hailing a ride, ordering food, translating in real time and much more.

I fail to understand why this has to be a device separate from a mobile phone. Once AI tech can actually do this (which it can't yet), it will be added to all mobile phones.

AR, on the other hand, is about something very different: augmenting our vision. It's on the path of bigger screens everywhere. Mobiles, TVs, all the screens are growing, and we keep having more of them. One final immersive screen that is always with us makes sense. It will take a while and much more tech, though. But Apple is probably the one company that can pull that off.


Gonna be interesting to see how Rabbit works out; promising no subscription raises the question of how they're going to pay inference costs.

It's clear they'll have to go back on that promise at some point, either locking functionality or better results behind a paywall, and when they do, a vocal percentage of their customer base will feel slighted even if the base functionality remains intact.

Ordered one cos, well, it's cheaper than the Vision Pro carry case in my country...

As for the Vision Pro: I'm sorry, but shipping the base model of a $3,500 device with 256 GB is so cynical. It seems like you're just making e-waste so that people spend the extra dollars for the 512 GB model. The pitch is that it's a professional spatial computer, right? So why be stingy, other than the hope that most will pay more once they hit the store page? Pure Cook-era Apple.


>raises the question of how they're going to pay inference costs

All their AI services are online, so they get plenty of user data to sell.


An “off-phone” whose features will likely be integrated into Google Assistant by end of year at which point they’ll likely shut down the whole thing?


So much rabbit hype when it should just be an app.


Well if you tried to launch an app first no one would care.

I view the hardware as expensive marketing for the eventual app release.


What a waste of resources.



I bought one - batch 6, so it’ll be at least mid year, but international so maybe longer.

I was interested in it anyway, but as a Perplexity Pro subscriber already, it made sense to get a year added on, along with a free(ish) gadget to play with.


This article is either a paid ad or a joke. The Rabbit does nothing that an Apple Shortcut can't do. It calls the OpenAI API and shows the response.
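To make the "thin wrapper" claim concrete, here's a minimal sketch of what "call the OpenAI API and show the response" amounts to. The endpoint and payload shape follow OpenAI's public chat completions API; the model name, prompt, and placeholder key are illustrative assumptions, and actually sending the request would require a real API key.

```python
# Sketch: the whole "assistant" loop is roughly one HTTP POST.
# build_request only constructs the call; nothing is sent here.
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-4o-mini", api_key="YOUR_KEY"):
    """Return (headers, body) for a chat completions request."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # real key needed to send
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("Play some jazz and book me a ride")
print(json.loads(body)["messages"][0]["content"])
```

Everything beyond this (speech-to-text in, text-to-speech out, showing the reply on a screen) is commodity plumbing a phone already has.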


The company has put an insane amount of money toward ad spend; I wouldn't be too surprised if many of the four dozen hype articles were paid for out of that budget.


Who is this for? I'm a middle aged software developer living in the US. It's not for me.


I mean, I’m equally uninterested in either of them. That said, the Rabbit is closer to something I could envision being successful. (Except that I hate voice UIs; even if they understand the words I say and what I mean, I don’t want to have to talk to something or listen to a response to get anything done.) But their interaction model is so slow and gives so little actual feedback, I can’t imagine anyone really using it, and even for what little it does, it has to be crazy expensive to operate.

I think passive AI assistants that watch our lives, take notes, manage schedules, remember things, etc., are probably where "real" AI could be the most beneficial for personal technology, but it's also clear that current LLMs are just not up to that sort of work, which requires multimodal input, huge amounts of context, actual data storage and retrieval, and no hallucinating.


The R1 is a solution looking for a problem...



