Hacker News | bArray's comments

> I completely understand the concerns about AI potentially replacing human thinking, but what if we look at this from a different perspective? Maybe AI isn’t here to replace us, but to push humanity beyond its own limits.

"Tool AI", yes, at least in theory. You always have to question what we lose, or want to lose. When wolves were domesticated into dogs, they likely lost skills, one of them being math [1]. Do we want to lose our ability to understand math, or to reason about complex tasks?

I think we are already losing the ability to "be bored". Sir Isaac Newton got so bored after retreating to the countryside during the Great Plague that he developed his theories of optics, calculus, motion and gravity. Most modern people would just watch cat videos. I wonder what else technology has robbed us of.

> If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?

As long as we are talking about "tool AI", then with the above caveats, maybe. But a more general AI (i.e. AGI) would be unlike anything else we have ever seen. Horses got replaced by cars because cars were better at being horses. What if a few AI generations away we have something better than a human at all tasks?

There was a common trope for a while that if AI took our jobs, we would all kick back and do art. It turns out that the likes of Stable Diffusion are good at that too. The tasks where humans succeed are rapidly diminishing.

A friend many years ago worked for a company doing data processing. It took about a week to learn the tasks, and they soon realised that the entire process could be automated entirely in Excel, taking a week-long task down to a few minutes of number crunching. Worse still, they realised they could automate the entire department out of existence.

> The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagined. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?

It could be that AI ends up doing the cool things and we end up doing the mundane tasks. For example, Stable Diffusion can quickly imagine a Vincent van Gogh version of the Mona Lisa, but folding laundry, hanging it to dry, dusting, etc, remain mundane tasks we humans still do.

Something else to consider is the power imbalance that will be caused. Already, just to run these new LLMs you need a decently powered GPU, and nothing short of a supercomputer and hundreds of thousands of dollars to train one. What if future AI remains permanently out of reach of all except those with millions of dollars to spend on compute? You could imagine a future where a majority underclass remain forever unable to compete. It could lead to the largest wealth transfer ever seen.

[1] https://www.discovermagazine.com/planet-earth/dogs-not-great...


I'm quite interested in the Z-bot [1]. I assume the actuators are good enough to allow the robot to be mobile, but there's no information about the sensors, compute, battery, etc. It's difficult to know what you are getting into.

[1] https://shop.kscale.dev/products/zbot


Yes, we plan to release the hardware spec shortly! We have only properly released our software stack so far.

Any idea when that would be?

For the purpose of an experiment, I would love to see $20k also offered to eke out more performance on the dav1d decoder, otherwise this is just a measure of how much money people are willing to pour into optimisations.

There is already significant investment from AOM, VideoLAN, and FFmpeg funding the development of dav1d. Is there a reason that that investment is not relevant?

From what I can tell, way more than $20k has been invested into dav1d, and that investment has generated many performance optimisations already. Ostensibly, that's what this rav1d bounty is competing with.


> Is there a reason that that investment is not relevant?

I wasn't aware of it; in that case this could be a more interesting comparison.

> From what I can tell, way more than $20k has been invested into dav1d, and that investment has generated many performance optimisations already. Ostensibly, that's what this rav1d bounty is competing with.

Was rav1d a port of dav1d or developed from scratch?


Your comment reads like the schoolroom rule that if your mom gives you cookies to share with your friends at lunch, she'd better bring enough for everyone.

This is being done to prove a point: that Rust is just as efficient as C while being memory safe.

Your idea is misaligned with the goals of the contest and more a moral complaint on “fairness”. Where do you get this notion that any entity offering their own funds in a contest needs to distribute it evenly?


Your reply reads as if you have skin in this game and are for some unknown reason offended by the idea of a fair test.

> This is being done to prove a point, that rust is just as efficient as c while being memory safe.

I understand that, but it could just end up being a measure of how much money we are willing to pour into libraries. If you start putting money into any language, you should see efficiency increase and bugs decrease.

A nice result would be them being about the same speed, but Rust offering greater protections.

> Your idea is misaligned with the goals of the contest and more a moral complaint on “fairness”. Where do you get this notion that any entity offering their own funds in a contest needs to distribute it evenly?

It was curiosity, not a demand.


I would also go the other way: you have something you want sent to somebody on paper, but it's only printed at the last mile, in the delivery vehicle.

A birthday card for example doesn't need to be sent across the country or across the world, it only needs to become physical as close to your door as possible.

Maybe this could be a security measure too: you have a document that can only be printed by a secured machine and is only produced at the last mile based on current position. It would reduce the risk of the mail being intercepted or mis-delivered.


I can think of a few use cases:

1. Desktop - If both implementations run the same but one is faster, you run the faster one to stop the decode spluttering on those borderline cases.

2. Embedded - Where resources are limited, you still go for the faster one, even if it might one day lead to a zero-day, because you've weighed up the risk: reducing the BOM is an instant win, and trying to factor in some unknown code element isn't.

3. Server - You accept media from unknown sources, so you are sandboxed anyway. Losing 5% of computing resources adds up to big $ over a year at enough scale. At YouTube, for example, it could mean millions of dollars a year of compute doing a decode and then re-encode.
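The scale point can be made concrete with a back-of-the-envelope calculation. Every figure below is a hypothetical illustration (the fleet spend and decode share are assumptions, not real YouTube numbers):

```python
# Back-of-the-envelope: yearly cost of a 5% decoder slowdown at scale.
# All figures are hypothetical illustrations, not any company's real numbers.

def extra_yearly_cost(fleet_cost_per_year, decode_share, slowdown):
    """Extra spend if the decode portion of a compute fleet gets `slowdown` slower."""
    return fleet_cost_per_year * decode_share * slowdown

# Say a video service spends $500M/year on compute, 20% of that on
# decode/re-encode, and the decoder is 5% slower:
cost = extra_yearly_cost(500_000_000, 0.20, 0.05)
print(f"${cost:,.0f} per year")  # $5,000,000 per year
```

Even small single-digit percentages land in the millions once the fleet is large enough, which is why decoder micro-optimisations attract funding at all.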

Some other sources of resistance:

1. Energy - If you have software being used in many places over the world, that cost saving is significant in terms of energy usage.

2. Already used - If the C implementation is working without issue, there would be high resistance to spending engineering time to swap in a slower implementation.

3. Already C/C++ - If you already have a codebase using the same language, why would you now include Rust into your codebase?

4. Bindings - Commonly used libraries use the C version and are slow to change. The default may remain the C version in the likes of ffmpeg.


> 3. Server - You accept media from unknown sources, so you are sandboxed anyway. Losing 5% of computing resources adds up to big $ over a year at enough scale. At YouTube, for example, it could mean millions of dollars a year of compute doing a decode and then re-encode.

I wish big tech had to pay for all the Electron garbage they produce


You mean the Electron app? I agree; I understand the problem they're trying to fix, but it is such a garbage way of producing software. I have to have Discord running for some projects, and it is heavy as hell.

It'll likely be to do with financial responsibility due to where the funding comes from. They have an obligation to check that they are not sending funds to a terrorist group to solve code bounties, etc.

I would like to hear how that fluid UI works on that circular display; that looks like an absolute pain to get right. I think I could boil my brain just thinking about how they might have begun to tackle it and all the challenges they had. Dealing with Unicode alone, for dynamically resizing shapes, is a headache in the making.

(I would suggest reviewing the fruitful language of your comment...)

I've always owned an older smartphone, and looking at that new UI I have no idea what some of that stuff does, because it doesn't tell me, and the icons have been smoothed and abstracted so much that I can barely tell what they are for.

I think the reason the UIs are dumbing down is that the people using them are changing. We're talking about people doom-scrolling for hours, watching less-than-60-second clips of ADHD cocaine. We're talking about people who use an LLM and TTS to explain a paragraph of text they didn't want to read.

It feels like there needs to be a split somehow, an Android front-end for those people and a boring consistent front-end for others. Failing that, I would accept a serious Linux smart phone, but it would need decent development to actually get somewhere.


> I think the reason the UIs are dumbing down is that the people using them are changing.

No. We are not changing. But it seems that there are people paid to make changes. Not to improve, just to change. Why does the Messages app need a new, much worse UI every couple of months?


I don't know how this can be legal at all; the liabilities are against the company, not the owners, and nothing changed about the company. When they purchased the company (its name/trademark, the customer base, software, etc), they also purchased the liabilities. The only way I can think of to avoid purchasing the liabilities is to go into bankruptcy.

If this is allowed to sit, then any small/medium tech company could promise the world to their customers, then just "sell" the company to a family member without the "liabilities" and there would be no recourse.

That all said, I'm launching my new company "infinite money glitch". For 0.1 BTC for a life time subscription we'll send you 0.01 BTC back every month. Don't worry about the sale of the company planned in a few months to my cousin, trust me bro.


I think what interests me the most about this is what exactly is the cause of the liability here, assuming that the product name and company entity weren't sold off to the current owner.

Is it the customer database?

Is it the IP / domain of the server?

Is it the website that promised it and hosted the contract?

Is it the ownership of the app code rights?

Because if you think more about it, there is some potential gap here in copyleft licenses which might need to be fixed to protect projects against companies abusing this methodology.

Should we tie liabilities to contracts, and therefore to customer data, instead of to apps and their code? Is this a glitch in the law that needs to be fixed by legislators?

In European law, liabilities are tied to the legal entity, meaning there is a transition phase of five years during the liquidation process before an entity can be sold off by the liquidator, and within that time frame customers must file their complaints/claims against the legal entity if, e.g., they want their money back. That is unless a court decides otherwise and puts responsibility onto the owners, where provable illegal ownership behaviour (e.g. fraud) exists.


It is possible they bought the domain name, trademarks, code, database, etc. from the company but NOT the company itself.

However, if they have also had contracts with customers assigned to them I would have thought they would have to fulfil their side of the contract.


It's not just the domain, trademarks, code, database, etc, but also the customers and their contracts (except the ones they didn't want). I think in court it could easily be argued that it was a purchase of the company by a different name.

And their argument is that they were not made aware of these contracts, which to me sounds like the new owners should be suing the old owners for lack of responsible disclosure. Unless of course they signed away this right as part of the contract, or they were aware and don't have a leg to stand on.

In any case, this is super fishy.


Yes, definitely fishy, and I think you are probably right because customers had continuity of service without agreeing to new contracts.

It seems really unlikely that they have been assigned the contracts but not these particular contracts.


> If this is allowed to sit, then any small/medium tech company could promise the world to their customers, then just "sell" the company to a family member without the "liabilities" and there would be no recourse.

Yes, this is very likely the outcome. It will just be another perk in the consequence-free world of corporate governance.


The "high-end" modern MCUs are pretty great: you have the Nordic nRF offerings, but also the likes of the ESP32, where you can get Bluetooth and WiFi in a single package.

Personally, these days I would lean towards the ESP32; they continue to iterate on it nicely and it has great community support. I'm developing a smart watch platform based on MicroPython.


While the ESP32 is great for many applications, it's not for battery-operated stuff. Where an nRF draws 1-2 mA when using BLE, an ESP32 will draw 40 mA. And the chip they selected is even more efficient.

The low-power chips can also run in a low-power mode without BLE, drawing microamps, something the ESP can't match.

I really like ESP32 and I hope they have a low power chip on their roadmap.


Sure, I agree, but the WiFi functionality is a killer feature. As mentioned in another comment, it means a smart watch stops being a smart phone accessory and can actually operate as a stand-alone device.

Years ago I had a "smart" watch that had a SIM card and was a full mobile phone in its own right; I think it was just 10 years or so too early.


Concur. With nRF, you have to bolt on a separate nRF7002 Wi-Fi chip.

And this chip isn't a normal QSPI chip where you just read the datasheet. You have to use nRF Connect and Zephyr.

So this brings up the obvious question: what if I don't want my whole firmware to be Zephyr/nRF Connect, just for a Wi-Fi chip?


Manufacturer lock-in can be quite a problem. I'm not saying the ESP32 solves this fully, but you can mix and match as you like, and it's highly encouraged. I think with the ESP32 most build upon FreeRTOS, but I'm not aware of a strict requirement.

Of note, you can just pull in a lib to use Wi-Fi and BLE on an ESP32 (C3 at least). I've done it. It doesn't imply any changes to your firmware architecture. That's the part that bothers me about Nordic's approach.

Pretty sure you can use the 7002 with any MCU and vice versa. I don't know what the technical limitation would be.

Aren't ESP32s way more power hungry than typical BT-only parts though?

Not insanely so for a smart watch. Your smart watch battery will be something like 200 mAh, so for 20 hours of runtime you need to average 10 mA. With zero optimisation and the screen refresh rate at 30+ fps, I have a smart watch chewing 30 mA.

Getting down to 10 mA is not so bad. If you're not actively driving the display, you can under-clock significantly [1]; if you're not using WiFi, you can turn the modem off [2].

[1] https://docs.espressif.com/projects/esp-idf/en/stable/esp32/...

[2] https://docs.espressif.com/projects/esp-idf/en/stable/esp32/...
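As a quick sanity check on the arithmetic above (using the capacities and draws quoted; real devices lose some headroom to regulator/conversion losses, which this ignores):

```python
# Battery-budget arithmetic: capacity (mAh) / average draw (mA) = runtime (h).

def avg_current_ma(capacity_mah, target_hours):
    """Average current budget to hit a target runtime on a given battery."""
    return capacity_mah / target_hours

def runtime_hours(capacity_mah, avg_draw_ma):
    """Runtime at a given average current draw."""
    return capacity_mah / avg_draw_ma

print(avg_current_ma(200, 20))  # 10.0 -> 10 mA budget for 20 h on 200 mAh
print(runtime_hours(200, 30))   # ~6.7 h at the unoptimised 30 mA draw
```

So the unoptimised 30 mA figure only buys about a third of the target runtime, which is why the under-clocking and modem-off tricks matter.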


It might be just-about acceptable for a smartwatch. But anything the micro takes out of the power budget means less screen and radio time, which does add constraints.

PineTime, based on NRF52, will get you 4-7 days of practical usage.


There are ESP32 watches. One I have [1] comes with a quite thick 940 mAh battery, but my understanding is the battery life still isn't that amazing (just got it, haven't really tested the battery): something like less than a day of constant runtime, or a few days if you keep turning it off.

[1] https://lilygo.cc/products/t-watch-s3-plus


Yeah, I got one of those as well, and the older non-S3 version. Fun for developing; very powerful. I use it for developing ML applications for watches etc. (emlearn project). Great device for that, but battery life is not its strong point.

I have a similar one with a microphone; I dread to think how the GPS module and LoRa of that variant affect battery life!

Can confirm, I regularly get about 9 days of charge on my PineTime, running the latest PineTimeOS release. It's gotten better and better over the years, and the functionality keeps coming.

I used to use the PineTime with PineTimeOS, but mine eventually broke (corroded inside). Not having WiFi also made it annoying to develop for. With WiFi you suddenly don't need to communicate regularly with a phone, and the possibilities really open up.

I get that kind of experience with the Watchy… but the problem is, it's quite a bulky device and gets a bit tiring to wear after a while.

No, the ESP32 (the original one) is insanely power-hungry, especially its radio.

Also 20 hours of runtime is horrible.


I'm getting 10 hours of run time with the screen continuously on and graphics drawn every refresh, in MicroPython; extending this is definitely possible.

There are many ESP32 variants, depending on what you pick some may be more compelling for your use-case.


Well, we are talking about the smartwatch use case.

Even newer variants like the S3 or C6 only have acceptable power consumption; if run-time is what you're after, they are not the best fit.


Does this apply to C-3 and C-5 as well?

I would not consider the ESP32 a high-end MCU; it still lacks many peripherals (DSP, GPU) and its core clock is not high (only 240 MHz IIRC).

Recently they released the ESP32-P4, with very strong performance, but as you might guess, without a radio.


We are talking about an MCU, not a CPU :)

I think once we start talking about GPU, MMU, USB, display, etc, we're getting towards a CPU of sorts.

Speaking of low-end CPUs, I want to test out the Rockchip RV1103; those crazy little chips are apparently running Linux [1], and are even able to run Python [2]. Depending on power draw, a Linux-based smart watch could be on the horizon.

[1] https://www.luckfox.com/EN-Luckfox-Pico

[2] https://wiki.luckfox.com/Luckfox-Pico/Luckfox-Pico-SDK


Yes, we are talking MCUs, but it's now very common for an MCU to have a GPU.

For example, the BES2700BP and BES2800 have a 3D GPU, IIRC. Their specs are very impressive; too bad their SDK is kind of limited for non-Chinese vendors.


USB is trivial for most modern MCUs; even low-power/minimal-cost ones.

ESP32-S3 has all that minus a GPU. Runs Linux.

Looking at that now [1], seems like something I need to run a test with!

[1] http://wiki.osll.ru/doku.php/etc:users:jcmvbkbc:linux-xtensa...


You got downvotes, but I'm a firmware engineer and can point out more ESP deficiencies: #1 a completely fake FPU that they lie about; #2 awful memory bandwidth, not only slow but unpredictable; #3 small onboard memory; #4 low clock speed.
