
Yes, it would be a paid role. But I also don't necessarily want my job to be in jeopardy if my boss has a bad breakup.


That’s something you can’t control


If they're doing inference on edge devices, one challenge I see is protecting model weights. If you want to deploy a proprietary model on an edge AI chip, the weights can get stolen via side-channel attacks [1]. Obviously this isn't a concern for open models, but I doubt Apple would go the open models route.

[1] https://spectrum.ieee.org/how-prevent-ai-power-usage-secrets


Nobody is taking particular care to protect weights for edge-class models.


I'm curious what the motivation is here -- unfortunately, the dev blog is all in Chinese and I can't read it. If it's mostly to show a proof-of-concept of LLMs on a FPGA, that's awesome!

But if this is targeting real-world applications, I'd have concerns about price-to-performance. High-level synthesis tools often result in fairly poor performance compared to writing Verilog or SystemVerilog. Also, AI-focused SoCs like the Nvidia Jetson usually offer better price-to-performance and performance-per-watt than FPGA systems like the KV260.

Focusing on specialized transformer architectures with high sparsity or significant quantization could potentially give FPGAs an advantage over AI chips, though.

Not to toot my own horn, but I wrote up a piece on open-source FPGA development recently going a bit deeper into some of these insights, and why AI might not be the best use-case for open-source FPGA applications: https://www.zach.be/p/how-to-build-a-commercial-open-source


AMD hasn't shipped their "high compute" SOMs, so there is little point in building inference around them. Using programmable logic for machine learning is a complete waste, since Xilinx never shied away from sprinkling lots of "AI Engines" on their bigger FPGAs, to the point where buying the FPGA just for the AI Engines might be worth it: 100s of VLIW cores pack a serious punch for running numerical simulations.


AMD actually also does inference on PL, with reasonable commercial success. Have a look at FINN.


There have been some recent investigations into bitnets (1- or 2-bit weights for NNs, including LLMs) showing that 1.58-bit weights (with values -1, 0, 1) can achieve very good results. Effectively that's 2 bits. The problem is that doing 2-bit math on a CPU or GPU isn't going to be very efficient (lots of shifting & masking). But doing 2-bit math on an FPGA is really easy and space-efficient. Another bonus is that many of the matrix multiplications are replaced by additions (rough sketch below). Right now, if you want to investigate these smaller weight sizes, FPGAs are probably the best option.

> High-level synthesis tools often result in fairly poor performance compared to writing Verilog or SystemVerilog.

Agreed.
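To make the "multiplications become additions" point concrete, here's a minimal Python/NumPy sketch (the 2-bit packing layout and encoding are hypothetical choices, not taken from the BitNet paper) of what the CPU-side shifting & masking looks like, and how the dot product reduces to adds and subtracts:

    import numpy as np

    # Hypothetical packing: four 2-bit ternary weights per byte,
    # encoded as 0 -> 0, 1 -> +1, 2 -> -1 (values {-1, 0, +1}).
    ENCODE = {0: 0, 1: 1, -1: 2}

    def pack_ternary(weights):
        """Pack a 1-D array of {-1, 0, +1} weights into bytes, 4 per byte."""
        w = np.asarray(weights)
        assert len(w) % 4 == 0
        codes = np.vectorize(ENCODE.get)(w).astype(np.uint8)
        packed = np.zeros(len(w) // 4, dtype=np.uint8)
        for lane in range(4):
            packed |= codes[lane::4] << (2 * lane)  # shift each 2-bit code into place
        return packed

    def ternary_dot(packed, x):
        """Dot product of packed ternary weights with integer activations x.
        On a CPU this is all shifting/masking plus adds/subtracts (no multiplies)."""
        x = np.asarray(x, dtype=np.int64)
        acc = np.int64(0)
        for lane in range(4):
            codes = (packed >> (2 * lane)) & 0b11   # mask out one 2-bit lane
            acc += x[lane::4][codes == 1].sum()     # +1 weights: add
            acc -= x[lane::4][codes == 2].sum()     # -1 weights: subtract
        return acc

    w = np.random.choice([-1, 0, 1], size=1024)
    x = np.random.randint(-128, 128, size=1024)
    assert ternary_dot(pack_ternary(w), x) == int(np.dot(w, x))

On an FPGA the unpack step effectively disappears: each 2-bit weight can be wired straight into a small add/subtract unit, which is where the area and efficiency win comes from.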


I'm curious, do you have any intuition for what percent of the time is spent shifting & masking vs. adding & subtracting (int32s I think)? Probably about the same?


The blog's in Japanese, not Chinese.


The big challenge when it comes to using FPGAs for deep learning is pretty simple: all of that reprogrammability comes at a performance cost. If you're doing something highly specific that conventional GPUs are bad at, like genomics research [1] or high-frequency trading [2], the performance tradeoff is worth it. But for deep learning, GPUs and AI ASICs are highly optimized for most of these computations, and an FPGA won't offer huge performance increases.

The main advantage FPGAs offer is being able to take advantage of new model optimizations much earlier than ASIC implementations could. Those proposed ternary LLMs could potentially run much faster on FPGAs, because the hardware could be optimized for exclusively ternary ops. [3]

Not to toot my own horn, but I wrote up a blog post recently about building practical FPGA acceleration and which applications are best suited for it: https://www.zach.be/p/how-to-build-a-commercial-open-source

[1] https://aws.amazon.com/solutions/case-studies/munich-leukemi...

[2] https://careers.imc.com/us/en/blogarticle/how-are-fpgas-used...

[3] https://arxiv.org/abs/2402.17764


Are you trying to scare people away from FPGAs? GPUs aren't actually that _good_ at deep learning, but they are in the right place at the right time.

You can rent high-end FPGAs on AWS (https://github.com/aws/aws-fpga); there is no better time to get into FPGAs. On the low end there is the excellent https://hackaday.com/2019/01/14/ulx3s-an-open-source-lattice...

Modern FPGA platforms like Xilinx Alveo have 35TB/s of SRAM bandwidth and 460GB/s of HBM bandwidth. https://www.xilinx.com/products/boards-and-kits/alveo/u55c.h...
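A rough back-of-the-envelope way to read those numbers (a sketch, not a benchmark: it assumes batch-1 token generation is bound by streaming every weight once per token from HBM, and the GPU bandwidth figure is approximate):

    # Memory-bandwidth ceiling on batch-1 token generation,
    # assuming every weight is read once per token from HBM.
    def tokens_per_second_ceiling(n_params, bytes_per_weight, bandwidth_gb_s):
        model_bytes = n_params * bytes_per_weight
        return bandwidth_gb_s * 1e9 / model_bytes

    # Alveo U55C-class HBM (~460 GB/s) vs. an A100-class GPU (~2 TB/s), 7B-parameter model.
    for name, bw in [("Alveo U55C HBM", 460), ("A100-class GPU", 2000)]:
        for label, bpw in [("fp16", 2.0), ("2-bit", 0.25)]:
            rate = tokens_per_second_ceiling(7e9, bpw, bw)
            print(f"{name:15s} {label:6s} ~{rate:5.0f} tok/s")

By that crude measure the HBM-attached FPGA is indeed roughly a quarter of the GPU at the same precision; the interesting case is when quantization or sparsity lets the FPGA move far fewer bytes per token.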


If I remember correctly, about 80% of a modern FPGA's silicon is used for connections. FPGAs have their uses, and very often a big part of that is the Field Programmability. If that is not required, there is no good reason another solution (ASIC, GPU, etc.) couldn't beat the FPGA in theory. Now, in practice there are some niches where this is not absolutely true, but I agree with GP that I see challenges for deep learning.


An ASIC will always have better performance than an FPGA, but it will have an acceptable cost only if it is produced in large enough numbers. You will always want an ASIC, but only seldom will you be able to afford it.

So the decision of ASIC vs. FPGA is trivial: it is always based on the estimated price of the ASIC, which depends on the number of ASICs that would be needed.

The decision between off-the-shelf components, i.e. GPUs and FPGAs, is done based on performance per dollar and performance per W and it depends very strongly on the intended application. If the application must compute many operations with bigger numbers, e.g. FP32 or FP16, then it is unlikely that an FPGA can compete with a GPU. When arithmetic computations do not form the bulk of an algorithm, then an FPGA may be competitive, but a detailed analysis must be made for any specific application.
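A toy illustration of that break-even calculation (all figures are hypothetical placeholders, just to show the shape of the trade-off, not real quotes):

    # Break-even volume for doing an ASIC instead of shipping FPGAs.
    asic_nre = 2_000_000      # hypothetical NRE: masks, tooling, verification ($)
    asic_unit_cost = 15       # hypothetical per-chip cost at volume ($)
    fpga_unit_cost = 400      # hypothetical per-unit FPGA cost ($)

    break_even_units = asic_nre / (fpga_unit_cost - asic_unit_cost)
    print(f"ASIC pays off above ~{break_even_units:,.0f} units")  # ~5,195 with these numbers

Below a few thousand units the ASIC's NRE never pays for itself, which is exactly the "only seldom will you be able to afford it" situation.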


I'm definitely not! I'm a hardware designer and I work with FPGAs all the time, for both work and personal projects. Like with all things, there's a right tool for every job, and I think for modern DL algorithms like Transformers, GPUs and AI ASICs are the better tools. For rapid hardware prototyping, or for implementing specialized architectures, FPGAs are far better.


Large, fast FPGAs are great but very expensive; small, slow FPGAs are not practical for most solutions, where significantly cheaper ARM controllers are used instead.


Cost and practicality are context dependent.


500GB/s is going to limit it to at best 1/4 the DL performance of an Nvidia GPU. I'm not sure what the floating-point perf of these FPGAs is, but I imagine that also might set a fundamental performance limit at a small fraction of a GPU.


Well, I keep seeing all models quantized, and for 2-bit, 4-bit and 1-bit quantizations I had very good inference performance (either throughput or latency) on CNNs and some RNNs on Alveo boards using FINN (so, mostly high-level synthesis and very little actual FPGA wrangling). No idea about the current status of all these, will read the paper though :-)


A $300 board (I'm including shipping and customs) is not low end. Low-end FPGA boards are ~$30 these days.


There are two other problems with FPGAs:

1. They are hard to use (program). If you're a regular ML engineer, there will be a steep learning curve with Verilog/VHDL and the specifics of the chip you choose, especially if you want to squeeze all the performance out of it. For most researchers it's just not worth it. And for production deployment it's not worth the risk of investing into an unproven platform. Microsoft tried it many years ago to accelerate their search/whatever, and I think they abandoned it.

2. Cost. High performance FPGA chips are expensive. Like A100 to H100 price range. Very few people would be willing to spend this much to accelerate their DL models unless the speedup is > 2x compared to GPUs.


FPGAs are also reasonably good at breadboarding modules to be added to ASICs. You scale down the timing and you can run the same HDL and perform software integration at the same time as the HDL is optimized.

Much cheaper and faster than gate level simulation.


Every couple of years I revisit the FPGA topic, eager to build something exciting. I always end up with a ton of research, where I learn a lot but ultimately shy away from building something.

This is because I cannot find a project that is doable and affordable for a hobbyist but at the same time requires an FPGA in some sense. To put it bluntly: I can blink an LED for a fiver with a micro instead of spending hundreds on an FPGA.

So, assume I am reasonably experienced in software development and electronics, and that I have 1000 USD and a week to spend.

What could I build that shows off the capabilities of an FPGA?


I work at one of the big 3 FPGA companies, so I can give you an idea of where our teams spend most of their time, and you can translate that into a hobbyist project as you will.

1. Video and Broadcast. Lots of things to be done here. New protocols are being introduced every year by IEEE for sending video between systems. Most cutting-edge cameras have some sort of FPGA inside doing niche image processing. You can get a sensor and build yourself your own Camera-on-Chip. It's a fantastic way to lose a year or two (I can attest to that). Some good material on the matter here: https://www.mathworks.com/discovery/fpga-image-processing.ht...

2. Compute Acceleration. This is more data centre-specific. SmartNICs, IPUs and the like. Hard to make a dent unless you want to spend 200k on a DevKit, but you could prototype one on a small scale. Some sort of smart FPGA switch that redirects Ethernet traffic between a bunch of Raspberry Pis dependent on one factor or another. One company that comes to mind is Napatech. They make a bunch of really interesting FPGA server systems: https://www.napatech.com/products/nt200a02-smartnic-capture/

3. Robotics and Computer Vision. Plenty of low-hanging fruit to be plucked here. A ridiculous amount of IO, all needed to work in near real time. Hardware acceleration kernels on top of open standards like ROS 2. I always point people in the direction of Acceleration Robotics' startup in Barcelona for this. They're epic: https://github.com/ros-acceleration

4. Telecommunications. This is a bit of a dark art area for me, where the RF engineers get involved. From what my colleagues tell me, FPGAs are good for this because nothing besides a custom ASIC can service the massive MIMO antenna arrays, and the rate of innovation in this area means an ASIC made one year is redundant the next. Software-defined radios are the current trend. You could have fun making your own radio using an FPGA: https://github.com/dawsonjon/FPGA-radio


Reasonably experienced and 'a week' can mean vastly different things... It's certainly easier to keep the cost down with longer time-frames.

For a focus on electronics rather than implementing some kind of toy 'algorithm accelerator', I find low-hanging/interesting projects where the combination of requirements exceeds a micro's peripheral capabilities - i.e. multiple input/output/processing tasks which could be performed on a micro individually, but adding synchronisation or latency requirements makes it rather non-trivial.

- Very wide/parallel input/output tasks: ADC/DACs for higher samplerate/bitdepth/channel count than typically accessible with even high-end micros

- Implementing unique/specialised protocols which would have required bit-banging, abuse of timer/other peripherals on a micro (i.e. interesting things people achieve with PIO blocks on RP2040 etc)

- Signal processing: digital filters and control systems are great because you can see/hear/interact with the output, which can help build a sense of achievement (a toy fixed-point reference model is sketched after this list).

When starting out, it's also less overwhelming to start with smaller parts and allocate the budget to the rest of the electronics. They're still incredibly capable and won't seem as under-utilised. Some random project ideas:

- Driving large frame-buffers to display(s) or large sets of LED matrices at high frame rate - https://gregdavill.com/posts/d20/

- Realtime audio filters - the Eurorack community might have some inspiration.

- Multi-channel synchronous detection, lock-in amplifiers, distributed timing reference/control

- Find a sensing application that's interesting and then take it to the logical extreme - arrays of photo/hall-effect sensors sampled at high speed and displayed, accelerometers/IMU sensor fusion

- Laser galvanometers and piezo actuators are getting more accessible

- Small but precise/fast motion stages for positioning or sensing might present a good combination of input, output, filtering and control systems.

- With more time/experience you could branch into more interesting (IMO) areas like RF or imaging systems.
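For the digital-filter idea above, the usual workflow is to write a small fixed-point "golden model" first and then check the HDL simulation output against it. A toy sketch in Python (the Q1.15 format, accumulator width and 4-tap moving average are arbitrary choices):

    import numpy as np

    # Toy fixed-point FIR "golden model": Q1.15 coefficients and samples,
    # wide accumulator with saturation, then rescale back to Q1.15.
    def fir_fixed_point(samples_q15, coeffs_q15, acc_bits=32, frac_bits=15):
        taps = np.zeros(len(coeffs_q15), dtype=np.int64)
        out = []
        for s in samples_q15:
            taps = np.roll(taps, 1)
            taps[0] = s
            acc = int(np.sum(taps * np.asarray(coeffs_q15, dtype=np.int64)))  # MAC step
            hi = (1 << (acc_bits - 1)) - 1
            acc = max(-hi - 1, min(hi, acc))       # saturate like a fixed-width adder
            out.append(acc >> frac_bits)           # rescale the accumulator
        return out

    # 4-tap moving average: 0.25 in Q1.15 is round(0.25 * 2**15) = 8192
    coeffs = [8192] * 4
    print(fir_fixed_point([32767, 32767, 0, 0, 0, 0], coeffs))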

With more info about your interest areas I can give more specific suggestions.


Good list, thanks. I have a couple of years professional experience as a software dev and worked in the embedded space too. Nowadays I am in security and that is definitely an area of interest.


I only dabble with recreationally reverse engineering industrial/consumer grade HW and following blogs/conferences, so I can only provide a rough shotgun of search terms to try and hit something you're interested in:

- The Glasgow interface explorer is an example of a smaller FPGA making interface level RE tooling more accessible.

- The Chipwhisperer hardware has a focus on power supply glitching, side-channel attacks and general hardware security education/testing.

- There's a handful of FPGA-based implementations intended for high-speed protocol sniffing/MiTM (TCP/IP, USB and CAN bus are all pretty common) on github etc.; Cynthion is one example.

- Some recent projects have been trying to implement and improve the FOSS ARM Cortex programming and trace experience; the Orbuculum ORBTrace probe is an example, though the benefits aren't fully realised yet.

- In an odd use-case for an FPGA, I've personally seen hardware that enforces brutal/paranoid DRM/licencing via customised downloaded bitstreams to guard against reverse-engineering/copy efforts, all to most likely run a soft-CPU. I've read (unsubstantiated) that this approach appears on some military hardware.

- Slightly adjacent to specific FPGA projects, but the SDR tooling ecosystem has lots of cool stuff to play with for wireless signal identification/spoofing/re-implementation: HackRF, LimeSDR, GNUradio etc. If you want to get deep, there's lots of overlap with custom FPGA implementations.


Thanks a lot. This is a rabbit hole I will happily go down.


I was part of a startup that did ternary CNNs on FPGAs in 2017. It involved a ton of nitty-gritty work and a massive loss of generality, and in the end a Raspberry Pi could solve the same problem faster and cheaper.


What about FPGAs as a means to experiment in ML and hardware architectures?


Um, no? The actual problem is that most FPGAs already have DPUs for machine learning integrated on them. Some Xilinx FPGAs have 400 "AI Engines" which provide significantly more compute than the programmable logic, the almost 2000 DSP slices or the ARM cores. This means that the problem with FPGAs is primarily lack of SRAM and limited memory bandwidth.

https://www.xilinx.com/products/boards-and-kits/vck190.html


If I was developing an AI app, I'd care about quality first before speed. And the open-source models just aren't as good as the closed ones.


Totally saw this one coming! [1]

I think one major challenge they'll face is that their architecture is incredibly fast at running the ~10-100B parameter open-source models, but starts hitting scaling issues with state-of-the-art models. They need 10k+ chips for a GPT-4-class model, but their optical interconnect only supports a few hundred chips.

[1] https://www.zach.be/p/why-is-everybody-talking-about-groq


This research is awesome. And I think organic semiconductors are super promising, especially looking at the success of OLED.

But the IEEE article is missing the biggest pitfall of this tech right now: it only works well at very low temperatures (80K and lower).


To be fair, they do sort of mention it:

> At low temperatures, the single-molecule device shows a strong change in current with only a small change in gate voltage

But you’re absolutely right that saying “low temperature” when talking about 80K is a huge understatement.


"Smart home" has really fallen flat across the board, from smart speakers to the Roomba. It would be great if Apple changed that, but I'm not optimistic. I just don't think people really want smart home devices all that much.


I think the issue is that smart home devices didn't do anything I couldn't already do, but a physical switch is faster than arguing with the computer.

Hey Google, can you turn off the baby bedroom light?

I'm sorry, I don't know that device.


I used to have a dozen or so Google Home devices for all sort of automation, but have mostly given up on it. I feel like Google is going to kill these any day now, the Google Assistant on them has been getting dumber and dumber. Where in the past, it would do its best to provide an answer, now for 99% of queries it just says "idk how to help with that."

So the only thing these devices are used for these days in my household are setting alarms and turning lights on/off. In the next home, probably won't even bother.


I've totally disabled my entire smart home ecosystem. Philips hue thankfully has a hubless fallback (with the Zigbee remote). I just hate the unreliable nature of wireless devices, and having to manage an account for every accessory (Philips hue account, wemo account, iCloud account, etc.). Matter/thread which was supposed to be a smart home revolution turned out to be a crapshoot requiring proprietary Thread border routers.


Probably A/B testing but mine answers most questions these days, I swear even those it couldn't answer in the past. It hasn't given me the I don't know answer in a long while. Still mostly used for lights, morning alarms, shop opening hours, asking if my dog can eat my snack and the weather.

I know others here saying they can do it themselves but nothing beats asking google to turn off the lights in the house when I've forgotten and I'm cosy in bed already.


There are also profiteering and security issues. A thermostat that wants to connect to the "cloud" and needs to know my street address, my name, etc.? -- Not happening. Companies selling "smart" devices instantly overcomplicated relatively simple appliances, making them beyond DIY repair level, and, on top of it, wanted to sell the service of supporting these overly complicated devices. It's just very hard to see the benefit, when all that e.g. the thermostat does is turn the boiler on and off, saving a few seconds for someone who'd otherwise have to do it manually.

There also aren't that many areas where automation could possibly accomplish much. I think, the main directions are:

* Optimize energy usage (the same thermostat thing). It doesn't really amount to much. It could be useful in industrial setting, but for households it just doesn't save that much money, even if it works well.

* Cleaning. Making roombas deal with furniture or large objects left on the floor seems like mission impossible without a significant change in approach. Similarly for surfaces that are above the floor (desk, shelves etc.) Cleaning the exterior could be quite an interesting thing in its own right, though. Stuff like removing dead foliage from the roof, for instance, or repainting the walls.

* Cooking. This could be potentially interesting, but will probably require a complete redesign of the tools used for cooking today to be reasonably priced. Eg. there would be no need for knives with handles for humans, because it's easier to make a slicing / chopping machine that has a very different configuration. Stoves and ovens would need to have some way of moving pots in and out automatically. Also, they'd probably have to be connected to the fridge and other kitchen storage... Which, in the end, means that it's not going to be an incremental upgrade. It will be also probably difficult to make the automated system coexist with human cooks...


> it just doesn't save that much money, even if it works well

You should see how wasteful typical American households are when they use a dumb thermostat. The best energy-saving feature is simply at-home vs away-from-home detection. I don't want my HVAC at home to run when I'm away at work or worse away at vacation, unless the temperature is really extreme. This easily saves me hundreds of dollars for a month-long vacation.

> Making roombas deal with furniture or large objects left on the floor seems like mission impossible

Roomba the company hasn't innovated in years. Switch to a different brand like Roborocks. Also don't choose models with a camera for privacy and performance reasons: lidar is much better.


High end robot vacuums have been innovating (including Roomba), with self-emptying bins and ML-based obstacle avoidance for things like cords and pet waste. This requires a camera, so any high end robot vacuum is going to have one. For me, those features are worth it.

The other big area of innovation is combo-vacuum+mop, including automatic water replacement and cleaning, but those features don’t seem to be fully ready for prime time yet. Roomba is behind the curve on this one.


ML-based obstacle avoidance does not require a camera. The latest products consciously avoid cameras to assuage fears that the images captured could be sent to the cloud.

https://www.technologyreview.com/2022/12/19/1065306/roomba-i...


It may not require a camera, but in practice I think most or all units use them. It’s definitely not a Roomba-specific issue.

I’m familiar with the “woman on a toilet” story and I think it’s overblown. It was a prototype unit used for training the ML model, not a consumer unit.


How does the thermostat know you are going on a month long vacation?


This is why it's a requirement that all "smart" devices in my house can fall back to "dumb" use. Smart switches are the way forward here. They work just the same as existing switches, but I can also control them via automation, voice, or an app.

I normally recommend people start their home automation journey with smart bulbs, ideally in their bedroom so they can speak their lights off and on while in bed, but long-term, switches are the best.


> I think the issue is that smart home devices didn't do anything I couldn't already do, but a physical switch is faster than arguing with the computer.

Exactly this.

I've had so many "aw fuck it I'll do it myself" moments with tech.

I no longer use any smart home devices. It was a passing fad as far as my experience goes.

Seemed cool, didn't really change my life.


As another commenter alluded to, good Smarthome tech becomes an appliance, which isn’t a desirable high margin business, and bad tech becomes a nuisance which is also not a desirable business.

That’s why Google is slowly getting out of it and shifting their focus from Nest. That’s why Apple never did much beyond a few speakers, and it’s why Amazon is right at home in that business (but even they’re getting out of the money-losing voice assistant aspect).

Robotics… seem like a miss to me. Very few tasks at home are as simple as vacuuming, but maybe I just lack the creativity and vision. Apple surely has some great tech left over from the car R&D so who knows. Apple is unfortunately not great at a “communal” perspective when making things.

I think a big issue no one talks about will be robot storage/garages. It’s already an issue for Roomba and anything bigger will be a no-go for many households. That is probably apples best chance - make it pleasant to look at and a status symbol.


> Robotics… seem like a miss to me. Very few tasks at home are as simple as vacuuming, but maybe I just lack the creativity and vision.

SwitchBot is working on something I haven't seen elsewhere yet: making robot vacuums double as mops is becoming common, and a few have modifications to hook into the water/waste lines so you don't have to refill them, but SwitchBot made that part of its primary design because they got the idea to use the robot to ferry water around to other places: it can automatically refill their humidifier and empty their dehumidifier.

I could imagine further enhancements for watering plants, or maybe if it's a success a future one that cleans rugs may become feasible.


> Robotics… seem like a miss to me. Very few tasks at home are as simple as vacuuming

Even vacuuming was pretty hard. Most vacuum robots were a disappointment until maybe 2 or 3 years ago.


Oh absolutely, it just has the advantage of being a well-known and relatively simple machine (the vacuum) that is expected to roll across the floor, and is expected to avoid household objects instead of interacting with them.

Almost all other tasks either operate at human-hands level (significantly bigger robot) or need intimate understanding of you and your home (eg picking things up off the floor) or need a ton of dexterity (folding clothes). Or all three.


I like smart devices for their automation potential. I use smart plugs to turn on and off lights, coffee makers, grow lights, heating mats. The ability to quickly program a plug to turn on every morning at 10am and turn off again at 10pm is valuable to me. It's even more valuable if you get into hobbies like aquarium keeping where you can automate lights and fish feeders.

Yes, you can do all of these things manually, but are you good at keeping a flawless schedule? It may not matter if you forget to turn on the coffee maker but it matters a lot if you forget to feed the fish. And you won't always be available to handle these things every single day, unless you work from home and follow an extremely rigid schedule.


I'm curious about the coffee maker bit. I drink espresso and I can't really imagine a benefit in turning a device on if someone isn't there to also put coffee in it, pour it, etc.

How much time does it save you having your coffee maker on before you go to get coffee?


I drink espresso as well. It takes 1/2 hour to warm up the machine after I turn it on. Having it turn on and be ready for me when I want to make a coffee saves 1/2 hour. Plus I have time-of-use billing for electricity so having it turn on early in the morning during off-peak hours saves money! It takes more energy to heat the thing up to operating temperature than it does to maintain that temperature all day.


Ah yes, makes sense. I use a stovetop pot on an induction plate. It heats up super fast.


The thing with a fish feeder is that it can be completely analog and still work fine.


Only if you're feeding once a day. You often want to feed 2 or even 3 times a day with a smaller amount at each feeding time. This tends to lead to less wasted food.


You've made a stunning point. There's no analog way to make things happen more than once a day. The logic is clear and irrefutable. Well done.


You feed fish during the day when the light is on. If you feed them 3 times a day and the light is on for a 12 hour period, that’s once every 4 hours 3 times followed by 12 hours of rest. What analog timer can do that?


A 12 hour one with stops every four hours. Reset the timer when you fill the feeder in the morning.

If this was an interview question I would reject the job. You don't seem worth working with.


That’s not going to work if you’re going on a vacation for a week. You want to fill the feeder with a week’s worth of food at a time and have it feed every day according to the schedule. Control the power to the feeder with a smart plug and set the feeder’s internal timer to 4 hours.


I have a Roomba, it works pretty well for doing 'maintenance level' vacuuming--keeping the level of cat hair to a manageable level, etc. For the most part, robot vacuums have succeeded in becoming boring, which is what I really want in an appliance.


I want a smart home, but I also do not want to send my data to anyone else and I don't really have the time to cobble something together.

If apple can refrain from sending the data to icloud or any other servers, then I would be very interested.


That exists today. Apple's Homekit is the only real solution for a smart home that doesn't send everything to/from the cloud.


I'm trying to update to v2 of HomeKit in the Home app but it won't let me unless I login to iCloud. From what I recall v2 of HomeKit relies on a hub spoke model, as opposed to v1 which relies more on broadcast packets.


> "Smart home" has really fallen flat across the board… I just don't think people really want smart home devices all that much.

Or are these devices just so common, unremarkable, and ubiquitous now that you just aren’t noticing them anymore? I can’t think of any of my friends and family who don’t at least have some smart speakers and smart lighting devices in their home.

Smart door locks that you can open with your phone, and smart door bells and security cameras that you can monitor remotely are becoming pretty common too.


I think it's that most actually useful smart home stuff was very quickly saturated. It is great to be able to adjust my lights easily. It is great to turn my TV off in the other room. It is great to have a robot vacuum.

But a smart coffee maker? A smart clock? A smart dishwasher? All this garbage ended up being gimmicks and it ran out of steam so quickly.

I hope the things that are useful continue to get support as the big players abandon smart home expansion.


I really do want smart home devices! My biggest issue so far has been stability and interoperability issues between each vendor and system. Those things have gotten better, but are still a headache. Apple is in a pretty good position to solve those pain points (at the cost of buying new devices, Apple brand or Apple Certified maybe?). Or maybe I need to dive deeper into HomeAssistant...


> Or maybe I need to dive deeper into HomeAssistant...

This is the answer.

HomeAssistant is fantastic and has unified everything of mine into a single platform. Control mostly happens through Google and Apple/Homekit devices (other than hardware switches), and everything works pretty seamlessly.


I thought I would be into smart home devices, but the companies fucked up by turning them into privacy and security liabilities, with poor interoperability and likelihood of turning into junk when the company goes under.


From what I hear, it's a nightmare to get Siri to know what light you want to turn on. Unfortunately it seems like apple is trailing the pack in the smart home area.


In terms of mental development, Siri is, at best, a 5-year-old kid. It has never been useful to me.


HomeKit and HomePods are such a mess, I don’t think Apple has it in them.


I have the feeling that basically all current smart home products are either a disappointment in their limitations or useful but really time consuming.

The only smart appliances I got are Philips hue lights, which are nice, but after the initial discovery I use them as classical light bulbs 99% of the time. I've found zero useful automations (not saying they don't exist) and I can't see why I would control lighting from my phone (at least not enough to justify spending hundreds on smart bulbs and smart switches).

Ultimately, I’m not against smart home but since each home is unique, by definition, those objects are only useful if the user is willing to invest enough time to tailor the configuration to be useful in his own unique house.

Oh, I thought about ranting about my experience with Sonos speakers, which are really nice speakers with great audio quality for the money and size, and everything you'd want except the "smart" part you are forced to use via their terrible (and closed) software.


Sonos would like a word.


I'm not holding my breath after seeing their stock price.


This doesn't appear to support any FPGAs other than their FPGA chiplet [1]. Also, like UncleOxidant said, the most complex parts of this toolchain are just existing open-source tools (Yosys and VPR).

This seems like a useful toolchain for ZeroASIC customers who are using their hardware, but not for FPGA enthusiasts more broadly.

[1] https://www.zeroasic.com/chiplets/fpga


Even so, it is very commendable for them to provide an open-source toolchain for their hardware, unlike the other vendors.

At least for me, this is a strong argument for choosing their products, where appropriate.


I think it supports whatever SiliconCompiler supports (which is both FPGA and different ASIC flows) and the EBRICK example is showing how to put that design onto an ASIC or FPGA for testing.

Edit: Here's the SiliconCompiler list: https://docs.siliconcompiler.com/en/stable/user_guide/introd...


yea this caught me too. that online digital twin is pretty slick tho


Commoditize your complements, baby.


There are meaningful size and performance challenges to implementing post quantum crypto, but I don't think "oh no, HTTPS handshakes are 20% slower!" is something we should actually worry about.

The real worry is all of the small, secure, embedded devices that literally don't have enough memory or compute to run these algorithms at all. Even the state-of-the-art hardware-accelerated implementations of PQC use a ton of memory and a ton of silicon area, to the point where they're untenable for lower end devices.
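For a sense of scale on the handshake side, here's a rough byte count (a sketch using the ML-KEM-768 encapsulation-key and ciphertext sizes from FIPS 203, and assuming the X25519MLKEM768 hybrid that browsers are rolling out):

    # Approximate key-exchange bytes on the wire in a TLS 1.3 handshake.
    # X25519 public key = 32 B; ML-KEM-768 encapsulation key = 1184 B,
    # ciphertext = 1088 B (per FIPS 203).
    classical = 32 + 32                    # client + server X25519 key shares
    hybrid = (32 + 1184) + (32 + 1088)     # X25519MLKEM768 hybrid key shares
    print(f"classical: {classical} B, hybrid PQC: {hybrid} B "
          f"(~{hybrid / classical:.0f}x the key-exchange bytes)")

A few extra kilobytes is noise for a browser, but it is real RAM and flash on a constrained microcontroller, which is where the pinch described above shows up first.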


> but I don't think "oh no, HTTPS handshakes are 20% slower!" is something we should actually worry about.

Why not? TLS handshakes are real. That said, we have pretty restrictive TCP windows from 20+ years ago that might need a bump-up at some point.

But importantly, we should not be making too many assumptions about use-cases, as if stateful connections like TLS are the holy grail. Small and fast crypto enables new use-cases; engineering standards should always take perf & overhead into account, and not focus only on existing use.

> The real worry is all of the small, secure, embedded devices that literally don't have enough memory or compute to run these algorithms at all.

Indeed! And more overhead increases the surface area for DoS attacks on "high-end devices" as well. So there are already clear examples of how these would break important things.

The reason I'm skeptical is precisely that QR (quantum-resistant) crypto today has known, significant setbacks, but unknown benefits.

