
I think the implication of the top comment is that cloud providers are buying revenue. When we say that cloud provider revenue is "up due to AI", a large part of that growth may be their own money coming back to them through these investments. Nvidia has been doing the same thing by loaning data centers money to buy its chips. Essentially these companies are loaning each other huge sums of money and representing the resulting income as revenue rather than loan repayments.

To be clear, it's not to say that AI itself is a scam, but that the finance departments are kind of misrepresenting the revenue in their financial statements, and that may be securities fraud.


So cool! I was just wondering the other day if it would be possible to build this! For front-facing mode, I wonder if you could add a brief "calibration" step to help it learn the correct scale and adjust angles, e.g. give users a few targets to hit on the screen.

Hi Jacob, thanks for checking it out. Regarding the calibration step for front-facing mode, I'm glad you brought this up. I did think of this, because the distance from the camera/screen to the hand affects the movement so much (the part where the angle of the hand is part of the position calculation).

And you are absolutely right regarding its use for getting the correct scale. For my implementation, I actually just hardcoded the calibration values based on where I want the boundaries for the Z axis. I got those values from the readings, so in a way it's like a manual calibration. :D But having a calibration step is definitely the right idea, I just didn't want to overcomplicate things at the time.
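For reference, a proper calibration step could be as simple as showing a few on-screen targets, recording the raw hand position for each, and fitting a scale and offset from tracker space to screen space. A rough sketch of the idea (the numbers and names here are made up for illustration, this isn't what's in my code):

    import numpy as np

    # Hypothetical calibration: show the user a few on-screen targets and record
    # the raw hand position the tracker reports when they "hit" each one.
    screen_targets  = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9], [0.9, 0.9]])
    raw_hand_points = np.array([[0.22, 0.18], [0.71, 0.17], [0.46, 0.43], [0.21, 0.68], [0.70, 0.69]])

    # Fit an affine map (per-axis scale + offset) from raw tracker space to
    # screen space with least squares: screen ~ [raw, 1] @ coeffs.
    A = np.hstack([raw_hand_points, np.ones((len(raw_hand_points), 1))])
    coeffs, *_ = np.linalg.lstsq(A, screen_targets, rcond=None)

    def to_screen(raw_xy):
        """Map a raw tracker reading to calibrated screen coordinates."""
        return np.array([*raw_xy, 1.0]) @ coeffs

    print(to_screen([0.46, 0.43]))  # should land near the centre target

The same idea would extend to the Z axis: instead of hardcoding the boundaries, ask the user to hold their hand at a "near" and a "far" position and fit the range from those two readings.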

BTW, I am a happy user of Exponent, thanks for making it! I am doing some courses and also peer mocks for interview prep!


Oh great to hear, thanks for the kind words :)

Just bought two. Nice work!

… It somehow just dawned on me that "firm" is between "hard" and "soft".


Holy..


Oh, my sweet summer children…

Once you fully internalize that your buggy program is running on a buggy framework, in a buggy language, in a buggy sandbox, on buggy virtualization, on a buggy file system, scraping along on buggy silicon running buggy microcode, managed by other buggy silicon running buggy firmware, using peripherals with their own buggy silicon and firmware, with everything happening billions of times per second (a car engine typically doesn't reach a billion cycles over a 20-year lifespan), it seems mind-bogglingly improbable that anything works at all lol.

The fact that it does work, and can even routinely reach 5 nines, is a testament to the generosity of the universe… and a good argument for making sure that, to whatever extent possible, you write your software to be resistant to random events and erroneous operations. Fail safe/fail soft, fail and retry, fail and restart. The more resilience we build into our work, the less angst we create in the world.
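One concrete shape "fail and retry" can take is a plain retry loop with exponential backoff and jitter around anything that might fail transiently. A rough sketch, with a made-up flaky_operation and arbitrary backoff numbers:

    import random
    import time

    def with_retries(operation, attempts=5, base_delay=0.1):
        """Run operation(); on failure, back off exponentially (with jitter) and retry."""
        for attempt in range(attempts):
            try:
                return operation()
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries: fail loudly so a supervisor can restart us
                # exponential backoff plus jitter so many clients don't retry in lockstep
                time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

    def flaky_operation():
        # stand-in for a call that fails some of the time (network, disk, cosmic ray...)
        if random.random() < 0.5:
            raise IOError("transient failure")
        return "ok"

    print(with_retries(flaky_operation))

The "fail and restart" half is the supervisor's job: let the process die on that final raise and have something external (systemd, a watchdog, an orchestrator) bring it back up.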


All that being said: holy shit, Voyager. A nearly 50-year-old robot in continuous operation, on its way out of the solar system into deep space. I would venture to say that this is an engineering accomplishment on par with the catch of the Starship booster the other day.

It’s amazing to see both the half-hour miracle of engineering and the half-century one playing out at the same time. I just hope our accomplishments today will in some way be as durable as those of generations past.


Coded by buggy primates who wear malfunctioning clothing.


Mind blown too!


Pffff, 30 years of IT and I just realized that today (if the name was indeed intended; if not, it's a cool coincidence).


I think you’re missing the broader point, which is that there is a lot to computer science outside of the purely mathematical formalism.

For example, distributed systems and networking are more like a physical science, because they seek to make generalized findings and theorems about real-world systems.

The author’s last point around complexity theory also resonates because it demonstrates the value of designing experiments with real-world conditions like computing hardware speed and input sizes.
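As a toy illustration of that last point, you can see how input size and constant factors interact by timing an O(n^2) insertion sort against the built-in sort across sizes; where (and whether) the crossover happens depends on the hardware and the runtime, which is exactly the kind of thing you only learn by measuring. A rough sketch, not from the article:

    import random
    import time

    def insertion_sort(xs):
        # O(n^2) comparisons, but simple and cheap per step for tiny inputs
        xs = list(xs)
        for i in range(1, len(xs)):
            key, j = xs[i], i - 1
            while j >= 0 and xs[j] > key:
                xs[j + 1] = xs[j]
                j -= 1
            xs[j + 1] = key
        return xs

    def avg_seconds(fn, data, reps=20):
        start = time.perf_counter()
        for _ in range(reps):
            fn(data)
        return (time.perf_counter() - start) / reps

    for n in (8, 64, 512):
        data = [random.random() for _ in range(n)]
        print(f"n={n}: insertion={avg_seconds(insertion_sort, data):.6f}s "
              f"builtin={avg_seconds(sorted, data):.6f}s")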


Distributed systems are famously hard to get right and mathematical formalism is pretty much the only way to do so at scale. Amazon found that out with S3[1]. TLA+ exists for very good reason!

That’s not to discount the reality that mapping the model to reality is hard work that needs to be done.

[1] https://lamport.azurewebsites.net/tla/formal-methods-amazon....


The topic is about theoretical computer science which I would say is a math.

The author's last point is basically like what applied math is to math. It’s applied computer science.


It is either "a kind of math", or "math", but not "a math".


Sounds ok to me in casual conversation. I use it like "fruit": "Orange is a fruit." "Orange is also a kind of fruit." "Orange is fruit" doesn't sound right, though.

Empathetically speaking, I’m sure it’s quite jarring for you when you read it.


A single fruit

A single math

One of these expressions doesn't make sense. So no, you cannot use "math" like "fruit".


Ok, you are right. But as long as people understand my points I’m fine with the grammatical error.


My unprofessional take: The SEC is concerned primarily with protecting investors. If anything, changing to a normal for-profit structure and removing the cap on returns would be viewed as more investor/market-friendly than their current structure, which is partly to blame for what unfolded last year.


This is the way


Was thinking about this recently. The author misses the importance of timing in the unity of perception. It would be rather easy to conduct an experiment and ask people, “did X and Y happen at the same time?” where X and Y are different stimuli. You could test over a variety of different senses and time differences to determine if people are integrating their experiences or not.

The other important element is attention and awareness. You can certainly be focused on one thing more than another—this can be a useful kind of disunity.


Here's an interesting experiment you can do around retrospective perception and timing: clap your hands once, listening for the sound of the clap. Notice what it's like to remember the sound of the clap for a minute or so. Now, clap your hands twice, again listening closely and noticing what it's like to remember the sound for a little while. In the second case, is it possible for you to remember the sound of the first clap only, without an echo of the second?


One thing I noticed going through school is that math concepts are usually taught before physics and other subjects, precisely because the math is viewed as a prerequisite for the other material. But this always seemed entirely backward to me, because much of the math was invented for and motivated by people trying to solve actual problems in those other disciplines. I think we should teach people in that same order, rather than treating math as an abstraction to be learned by itself.


I know it’s not a completely fair comparison, but to me this question is kind of missing the point. It’s like asking “Why take a cab if you know where you want to go?”


It's such a poor comparison it's ridiculous. A better analogy is "why take a cab if you know where you want to go and provide the car and instructions on how to drive?"


No, it's like saying "why take a cab when you have to give the driver so much guidance on driving that it equals or exceeds the effort of driving yourself?"

