
I wonder if the ESP32 has VLIW slots and whether tighter instruction packing is possible?


Neither Xtensa nor RISC-V is a VLIW architecture.


The Xtensa architecture is flexible and extensible by the user. The ability to define new instructions, hardware features, and VLIW configurations is among its key features. You can find more details here: https://en.m.wikipedia.org/wiki/Tensilica


I don't think that applies to the ESP32 family of devices. I've never heard of DSP hardware onboard them.

I think the comment you're referring to is talking about the architecture in general, not about the specific silicon we're discussing here.


The ESP32's ee.* operations in assembly look pretty much like aliases for VLIW bundles: in the same cycle they issue loads used by the next op while also doing multiplication on other operands. This is not a minimal Xtensa. They might not have the Tensilica toolchain available for redistribution so that these features could be used freely, but apparently they exposed these extensions in their assembler in some form.


Generally speaking, this is not correct. Base Xtensa is not VLIW, but Xtensa's various vector extensions do allow VLIW instructions, collectively called "FLIX."

It is doubtful that ESP32's Xtensa is VLIW-capable, though. Presumably their compiler would emit FLIX instructions if it were.


I am speculating, but I'd expect Google's reach outside the USA to be much larger than Yandex's reach outside Russia. In 2021, 46% of Alphabet's revenue originated from the USA. If you count only revenue from within the country's borders, that brings Google's share-of-GDP figure down to 0.393%. I am not sure what the percentage is for Yandex in Russia, but it doesn't look like Yandex is used much outside Russia.
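To make the arithmetic explicit, here is a back-of-the-envelope sketch that uses only the two figures in the comment above (the 46% US revenue share and the resulting 0.393%); the "unrestricted" starting figure is derived from those two numbers rather than taken from any independent source.

```python
# Back-of-the-envelope check of the share-of-GDP argument above.
# Only the two figures from the comment are used; everything else is derived.

us_revenue_share = 0.46     # share of Alphabet's 2021 revenue originating in the USA
us_only_gdp_share = 0.393   # resulting "Google as % of US GDP" figure (percent)

# Figure implied before restricting to US-originated revenue.
implied_total_gdp_share = us_only_gdp_share / us_revenue_share

print(f"Implied unrestricted share: {implied_total_gdp_share:.3f}%")   # ~0.854%
print(f"US-revenue-only share:      {implied_total_gdp_share * us_revenue_share:.3f}%")  # 0.393%
```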


It's not clear if it helps in the long run. It probably does, but a lot of friendly fire should be expected. A recent example, where a Ukrainian anti-aircraft unit was mistaken for Russians and killed in Kyiv (the incident where a Strela-10 vehicle collided with a car while under fire), shows the danger.


How about quantization? Does TensorFlow Lite perform the quantization, or is TensorFlow itself supposed to do it? Is it an iterative process or straightforward? Or are you training quantized models, as the NN API docs say?


The quantization is done with a special training script that is quantization-aware. We will be open-sourcing a quantized MobileNet training script soon to show how to do this.
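For readers wondering what quantization-aware training followed by TensorFlow Lite conversion looks like in practice, here is a minimal sketch using today's TensorFlow Model Optimization Toolkit. It is not the MobileNet script mentioned above; the tiny stand-in model, layer sizes, and file name are arbitrary, and only the general flow (wrap the model with fake-quant ops, fine-tune, then convert) is what matters.

```python
# Minimal quantization-aware training sketch (not the MobileNet script
# referenced above), using the TensorFlow Model Optimization Toolkit.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Any small Keras model stands in for MobileNet here.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Wrap the model so fake-quantization ops are inserted during training.
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# qat_model.fit(train_images, train_labels, epochs=1)  # fine-tune on real data

# Convert the quantization-aware model to a quantized TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The fit() call is commented out because it needs real training data; in practice you fine-tune the wrapped model for a few epochs so the fake-quant ops learn activation ranges before conversion.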


Yep, enjoyed this one.


They used customizable DSPs from Tensilica. I wonder if this is based on the same technology.


You're right, Microsoft revealed last year in a Hot Chips presentation that HPU v1 is a Cadence Tensilica DSP with custom instructions [1]. Given that, I'd bet that HPU v2's neural net core is Cadence's "Vision C5" [2].

[1] http://www.tomshardware.com/news/microsoft-hololens-hpu-arch...

[2] https://www.cadence.com/content/cadence-www/global/en_US/hom...

