
Founder of Estimote, Inc. (YC S13) here — we do beacons.

In the Project Aria video, they claim to have installed beacons at an airport to enable indoor location, only to dismiss the approach as something that "doesn't scale."

Instead, they say they "trained" an AI model using vision from glasses, allowing for vision-based localization.

So, here’s an honest question: which approach is actually easier, more cost-effective, and energy-efficient?

1) Deploying 100 or even 1,000 wireless, battery-operated beacons that last 5–7 years—something a non-tech person can set up in a day or two.

2) Training an AI model for each airport, then constantly burning compute power from camera-equipped glasses or phones that barely last a few hours.

Thoughts?


> So, here’s an honest question: which approach is actually easier, more cost-effective, and energy-efficient?

Really it's more like three questions.

1. Easier? I guess that depends how you define ease, but it largely depends on what resources you have available to you. If I'm Meta and I already have a ton of compute and AI training expertise but don't have relationships with all of the airports, stadiums, etc., their approach is probably easier. You'd have to spin up new teams of people all over the world to get beacons everywhere you want them.

2. Cost-effective? I don't know enough about the costs of your solution to give an accurate answer here, but again it just seems like they're probably already spending resources training models on a huge number of images of the world, so maybe not a lot of incremental cost here.

3. Energy-efficient? I would assume your approach wins here.


In my experience with a mesh wi-fi project, physical devices come with real-world physical side effects: accidents happen; devices go offline, get stolen, or get knocked off walls/shelves; a physical location needs to be negotiated with the space owner (less of a problem if the number of venues is in the hundreds, since we had business people to handle those at scale); dust, water, heat, animals, etc.

It's not a big problem if you want to equip one venue or a couple, but scaling the operation means these side-effects scale too, and we had to work on solutions to handle those, rather than working on our core competency of mesh wi-fi. Unsurprisingly the project was scrapped despite being technically feasible on a small scale - we had a couple of sites.

Virtualizing a physical space gives you more flexibility. It keeps most problems in the software-engineering space and limits physical requirements (e.g. someone might still need to walk around an airport to update the model, but I can't think of any other major ones).

That said, AI is sexy (right now), Meta is heavy in the MR space and the tech is reusable, even if it's not the most energy-efficient solution.

(disclaimer: just my personal ramblings, I don't work on project Aria)


Getting permission to install hardware is a lot harder than not getting permission to install hardware. It isn’t the hardware that doesn’t scale, it’s the people.

I've used beacons a lot in installations. I found their reliability was a bit over-promised [1]. If you want to know whether a user is within a 4 metre sphere, in a time window of about 5 seconds, then it's fine. But don't hope for anything more precise than that; the false positives/negatives aren't great.

A large part of the variation I found was due to how individual users held their phones, and the resulting signal attenuation.

[1] https://hackaday.com/2015/12/18/immersive-theatre-via-ibeaco...


Was that Bluetooth or UWB? 'Cause that's like saying VHS vs. 4K.

And here's an honest answer - it is likely to be option 2.

In over a decade of indoor robotics I have _never_ seen a beacon-based solution that practically scales (even marker-based solutions are challenging). And it's not because the tech is even bad – it's just that any process that involves _installing things_ is a PITA and wildly more expensive and time-consuming than it should be.

This kind of sucks but it is an unfortunate reality, in my experience at least.


> Training an AI model for each airport

This is where I think the gap is. We only have to train one model for all environments, not one per airport.

How do the costs compare for training one big model vs installing billions of beacons?

Also consider the pace at which model sizes, training, and operating costs are falling


With beacons, you need to install something; with glasses, you don't.

With glasses, you can map the space while identifying POIs; with beacons, you can't.

Unfortunately, no one really cares about energy use.


It depends on the scale you need to achieve. 1,000 beacons is easier at the scale of one town, but training a model for each airfield is Earth scale (in 1990 the US alone had ~6,000 airfields; the whole of Europe has fewer).

There are also some nuances: some cities are flat while others have large hills, so you need to place a few beacons on the sides of a hill (rough terrain needs many more beacons).

In practice, I have experience from a project to deploy a LoRaWAN network in a large city (Kiev). One competitor bought a study from cellular network planners, and at a first pass they drew ~300 access points to get more than 99% coverage.


1) Is a $1-$20M business requiring "humans in the loop" deploying, monitoring, and maintaining beacons with a single purpose, getting past lots of "humans with opinions" on "aesthetics" and "not in my back yard".

2) Is a $1-??? business requiring a few dedicated nerds working on CV, with infinitely more applications, and it doesn't require "invading" physical buildings you don't own.


Well, judging from consumer VR, people will pick inside-out tracking over beacons most of the time.

The headset needs the inside-out tracking anyway to draw spatially locked virtual objects.

To create a fresh spatial anchor at home on mobile hardware is maybe 1 second of compute time. But that doesn't really matter because the anchors would be shared across every user and computed offline beforehand.

As far as scaling goes, the device itself can be used to crowdsource these anchors, so it's not even close: the visual solution wins out.

That said beacons are probably better for supporting handset platforms. Powering up modern cell phone cameras to use AR is pretty slow and tedious for the user.


They already have a perfected vSLAM system for their VR headsets. Feature point extraction -> ego-motion derivation -> environmental mesh reconstruction.

The data will all be relative to initial positions and it will have drift, but how that affects your research goals will be use-case dependent (esp. since this is pitched at researchers rather than as ready-to-go entertainment).
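
For reference, that feature-extraction -> ego-motion front end looks roughly like the sketch below (OpenCV used purely as an illustration of the generic technique; these calls are my stand-ins, not Meta's actual implementation):

    # Rough monocular visual-odometry front end: feature point extraction,
    # matching, then ego-motion (relative pose) between consecutive frames.
    # Mesh reconstruction would happen downstream from the tracked poses/points.
    import cv2
    import numpy as np

    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed camera intrinsics

    def relative_pose(prev_gray, curr_gray):
        """prev_gray, curr_gray: consecutive grayscale frames (uint8 arrays)."""
        kp1, des1 = orb.detectAndCompute(prev_gray, None)   # feature point extraction
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)  # ego-motion derivation
        return R, t  # rotation and unit-scale translation; scale drifts without extra info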


I'm a little shocked by the use of beacons outside of manufacturing or logistics or robot safety contexts.

For anything you want to track in the meat realm, especially a place like an airport, the AirTag or Google-equivalent mesh networks are going to be far denser than your beacons and last forever with no power required.


> 2) Training an AI model for each airport, then constantly burning compute power from camera-equipped glasses or phones that barely last a few hours.

Their purpose in VR/AR is to have cheap indoor location; for them it's one more step in that direction. Eventually they will manage to do it with little compute.


When Valve came out with their VR headset that had base stations, everybody thought that'd be the holy grail: that you could never achieve better localization and tracking without base stations, and that a base-station-free method could never beat them.

Well, Meta poured a shit ton of money into making the Quest base-station free and they got there. We used to use the Valve setup for our robotics applications, but we swapped it out for the Quest because honestly the Quest was as good but much easier to set up and operate.

The bitter lesson is: don't bet against data or compute. Also, I don't think you'd have to train an AI model for each location forever. Things get more efficient, etc.


> which approach is actually easier, more cost-effective, and energy-efficient?

I think you are asking the wrong question. The right question is: "Which approach will people use?"

Doesn't matter if it is the easiest cheapest most energy efficient thing, if people don't use it.


AI will get faster and more energy efficient over time. Deploying physical hardware will never improve in any meaningful way that fixes the biggest problem: deploying X amount of things everywhere you need them. It's a non-starter.

Why not just set up QR codes that link the location to your phone, which the glasses can scan instead of a beacon? You could put up as many as you want and slap them on the wall.
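
For example, something as simple as one printable code per point of interest would do (the payload scheme below is just something I made up):

    # Minimal sketch: one printable QR code per point of interest, encoding a
    # made-up deep-link payload (the "myvenue://" scheme is just for illustration).
    import qrcode  # pip install qrcode[pil]

    pois = {
        "gate-b12": (52.3086, 4.7639),    # hypothetical lat/lon for a gate
        "baggage-3": (52.3091, 4.7655),
    }

    for poi_id, (lat, lon) in pois.items():
        payload = f"myvenue://locate?poi={poi_id}&lat={lat}&lon={lon}"
        qrcode.make(payload).save(f"{poi_id}.png")   # print it, stick it on the wall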

> 100 or even 1,000

There are many single airports with more than 100 points of interest. Now extend that to every US state...


My answer is why not both? Is the end goal energy efficiency or making a product that works?

3) Paint some QR codes on walls/signage to help make 2) easier?

Do we really still train a model for EACH airport?

using wifi routers?

Hi HN, this is Jakub, one of the founders of Estimote Inc. (YC S13).

After almost 10 years of research and development, it is finally here: SpaceTimeOS, a new operating system for the physical world, which we have been working on since we graduated YC.

With our new sensors (UWB/BLE/LTE) placed in the corners of a room, it creates a digital twin of any physical business and stores it in the cloud, where the real-time position of all objects/people/vehicles is visible with inch-precise accuracy.

This digital twin can be manipulated using JavaScript: developers can create automations, visualize, and in general mold this digital replica of a physical business.

Happy to answer any questions...


Hi HN,

this is Jakub, founder of Estimote, Inc (YC S13). Happy to share we have just launched a new device that has BLE, LTE-M/NB-IoT and GPS integrated.

It can compute its indoor and outdoor position and lasts for years on a battery.

It is fully programmable using JavaScript and a simple Web IDE.

Happy to answer any questions here!


Hi HN, This is Jakub, founder of Estimote.

We just launched a new Asset Tracking API. Bluetooth beacons attached to walls can now scan and locate smaller beacons attached to objects and pass that location data to the Cloud via a low-power mesh network they create.

Via the API it is possible to access the quasi-real-time location of these assets on a floor plan.
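
To give a feel for it, fetching asset positions looks roughly like the snippet below (the endpoint, credentials, and field names are simplified placeholders, not the exact production API):

    # Illustrative only: the URL, credentials, and field names are placeholders,
    # not the exact production API.
    import requests

    APP_ID, APP_TOKEN = "my-app-id", "my-app-token"   # hypothetical credentials

    resp = requests.get(
        "https://cloud.example.com/v3/assets",         # placeholder endpoint
        auth=(APP_ID, APP_TOKEN),
    )
    for asset in resp.json():
        # quasi-real-time position of each tagged object on the floor plan
        print(asset["id"], asset["position"]["x"], asset["position"]["y"])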

If you have any questions we are around.


Hi HN, This is Jakub, founder of Estimote, Inc.

We have just released and published to GitHub (https://github.com/Estimote/iOS-Indoor-SDK) an update to the Indoor Location SDK. It uses the sensor fusion built into ARKit, which dramatically improves accuracy.

It is also possible to keep session persistence, so all mobile users in the same location will see the same virtual objects via AR mobile apps.


Please note our Location Beacons with UWB have two radios:
- Bluetooth (BLE), which is low-power and gives a few meters of positioning accuracy
- Ultra-wideband (UWB), which is not low-power but gives inch precision

This Robot Operating System (ROS) SDK uses both radios. We use BLE and the Bluetooth mesh to power up only the UWB radios that are nearby, to preserve the energy of the system.
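
Conceptually, the duty-cycling works like the sketch below (an illustration of the idea only, not our actual firmware code):

    # Conceptual sketch only (not actual firmware): keep the power-hungry UWB
    # radio off by default, and use the low-power BLE mesh to wake only the
    # neighbors needed for a ranging session.
    def range_with_neighbors(beacon, neighbors):
        for peer in neighbors:
            beacon.ble_mesh.send(peer.id, "wake-uwb")      # cheap BLE wake-up message
        beacon.uwb.power_on()
        distances = {p.id: beacon.uwb.range(p.id) for p in neighbors}
        beacon.uwb.power_off()                             # UWB sleeps between sessions
        for peer in neighbors:
            beacon.ble_mesh.send(peer.id, "sleep-uwb")
        return distances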


John,

our beacons are commercially deployed in many verticals. The SDK together with the Cloud provides provisioning and security mechanisms.

Since beacons broadcast public Bluetooth signals, we don't want an arbitrary app/device to learn its location in a venue that is not provisioned for that application.

For example, you don't want the AliBaba mobile app to know you are in Best Buy, next to the console games.

That's why we have an optional encryption mechanism, and our ToS prohibits reverse engineering our SDK.

Our customers have access via APIs to almost all raw data, so if they want to improve accuracy for their application they can do so.


Hi Hacker News,

this is Jakub, co-founder of Estimote, Inc.

Earlier this year we added an additional UWB radio to our BLE beacons. We did that mostly for floor-plan auto-mapping (read more here: http://blog.estimote.com/post/154460651570/estimote-beacons-...)

Many people asked us if it is possible for robots/drones/AGVs to connect to UWB beacons and get few-inch-precision positioning, so we decided to release a ROS package.

You can install it on your Raspberry Pi, connect to UWB beacons, and start locating your robot/device.

We are around - let us know if you have any questions.


Hi HN,

this is Jakub, founder of Estimote, Inc. (YC S13).

We just released a new beacon firmware supporting low-power routed mesh over BLE.

It's a slightly different implementation than the one the Bluetooth SIG has just standardized, but it will give you a good overview of what is possible with Bluetooth Mesh.

If you already have beacons, just upgrade the firmware; here is a nice tutorial: https://developer.estimote.com/managing-beacons/mesh-at-scal...

Feel free to post any questions - happy to answer them here.


Hi HN, this is Jakub, founder of Estimote (YC S13). We just presented our new product at the Bluetooth Discover Blue event during CES: Location Beacons with an ultra-wideband (UWB) radio built in.

Thanks to time-of-flight technology, beacons know the distance between each other and, using Bluetooth mesh, pass this data to the cloud, creating a floor plan automatically.

On top of that, indoor location apps can be created. Happy to answer any questions here.


Time-of-flight is like knowing dt_[1,1] = 0, dt_[1,2], dt_[1,3], ..., dt_[2,3], ..., dt_[n,n] for all inter-beacon times, and I'm assuming the Bluetooth signal travels through air at an essentially constant speed (doing any muxing of Bluetooth "channels", if such a thing need exist, to prevent "overlap" of bandwidth?).

So do you have some kind of convex hull program that finds a shape of (dx = v * dt), where any predicted dx_[1,3] >> dx_[2,3] + dx_[1,2] implies some line-of-sight obstruction between beacon 1 and beacon 3? Especially when combined with dB of signal strength versus mere "dx" quantity alone?

I guess I'm wondering what the typical resolution is of a floor plan, how many beacons are necessary, and what kind of algorithm can crunch all those numbers into a neat path-planning-type solution.

edit: I'm quite the fool this morning, having not read the post before commenting! Nicely done. Looks like a sweet product. Hope you're able to grow the company!


Time-of-flight is actually possible because of the UWB radio. It's a chip in addition to Bluetooth that can estimate the distance between nodes with inch precision. Then, using Bluetooth mesh, this data is passed to other beacons and to a phone, where auto-mapping is performed. Once we know the locations of the nodes, standard indoor positioning with Bluetooth beacons is performed.

Bluetooth range is approx. 100 m and UWB range 70 m, so that's the maximum distance between nodes. The more nodes you have, the more accurate the shape is.

For a 1,000 sq ft office you would probably need 12-20 beacons. For a retail store, more than 100.
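
For the curious, the core geometric step of auto-mapping is the classic problem of recovering coordinates from a pairwise distance matrix. A textbook classical-MDS sketch of that step looks like the code below (the real pipeline also has to handle measurement noise and non-line-of-sight):

    # Classical multidimensional scaling (MDS): recover 2D beacon coordinates
    # (up to rotation/translation/reflection) from pairwise UWB distances.
    import numpy as np

    def automap(D):
        """D: n x n symmetric matrix of measured inter-beacon distances (meters)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
        eigvals, eigvecs = np.linalg.eigh(B)
        top = np.argsort(eigvals)[::-1][:2]          # two largest eigenvalues -> 2D layout
        coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
        return coords                                # n x 2 relative floor-plan positions

    # Example: 4 beacons at the corners of a 10 m x 6 m room
    true_xy = np.array([[0, 0], [10, 0], [10, 6], [0, 6]], dtype=float)
    D = np.linalg.norm(true_xy[:, None] - true_xy[None, :], axis=-1)
    print(automap(D))  # same layout, up to a rigid transform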


Can you explain the pros and cons of using RFID labels with fixed geolocations vs your product?

