Hacker News new | past | comments | ask | show | jobs | submit | hansihe's comments login

$375 per box doesn't seem bad to me at all when you probably only need a couple per school?


Yeah, I feel like they are currently at about the price of camera traps 10 years ago. There is very little mass-manufacturability to them right now (it's all open source and made from off-the-shelf parts), but later, if we can find more funding, we are going to make a design more suited to manufacturing, which should hopefully drive the costs down even more! :)


> There is very little mass-manufacturability to them right now (it's all open source and made from off-the-shelf parts)

This is the obstruction to using them in an educational setting. If they were available for $600+ each but already completely built (minimal DIY), they would be more likely to get into (some) schools.


OTOH, it'd be a fun science project just to build one, maybe for a different set of kids than the ones that operate the box.

Just needs motivated teachers, if you ask me. I assume the mothbox is more of a high-school project, and building one seems on that level as well.


We have a group of kids in Rhode Island building some with the library there! Part of a "Wildlives" program where the kids also learn to put camera traps around the local nature!

Def just needs motivated teachers!


Totally! Right now we are just trying to get them out and tested on science projects around the world, but hopefully we can find funding to make more designs that could be manufactured in bulk (like the AudioMoth and GroupGets) and have even more of these things out and about!


It's still open source and actively maintained by Apple, they use it internally.

https://github.com/apple/foundationdb


It is now. There were a few years where it had basically disappeared (2015-2018). When Apple eventually put it back in the open-source world, it was done with little fanfare so it could be easy to miss.


> put it back in the open-source world

Just to clarify - FoundationDB was never open source before 2018. Binaries were available under certain conditions, but no source.


Edge TPUs are definitely not comparable to the datacenter TPUs. They only support TFLite for one.


Google Coral Edge TPUs have found a practical niche in low-power OSS Frigate NVR appliances, e.g. object recognition for security camera feeds.


Didn't they abandon edge TPUs?


Any references on that? For a couple of years, they were fetching 100% price premiums on eBay, due to high demand and low supply.



Helpful thread, thanks. Summary: Google support churned after the distribution transition to Asus IoT, the Frigate devs were preparing to fork the Google repos, and then new Google devs appeared.

> Google is getting back on top of things aka coral support which is nice.. it seems that the original devs weren't on the project and new devs needed to be given notice. Hopefully this continues and things are kept up to date.. updated libcoral and pycoral libraries are coming as well.

It's good that Frigate brought attention to languishing Linux maintenance for Coral. Rockchip 3588 and other Arm SoCs have NPUs, which will likely be supported in time, but each SoC will require validation. Coral Edge TPUs were a convenient single target that worked with any x86 and Arm board, via USB or M.2 slot.


I don't know about this at any detailed level, but doesn't designing standard cells for leading-edge nodes involve a lot of trial and error? Are the issues that can occur even well understood, to the level that they can be simulated?

With the approach you mention, would it involve creating "custom standard cells", or would the software allow placement of every transistor outside of even a standard cell grid? If the latter, I would have trouble believing it could be feasible with the order of magnitude of computing power we have available to us today.


The best results will be with custom shapes and custom individual placement of every transistor outside standard cells but within the PDK rules. Going outside the PDK rules will be even better, but also harder.

The trial and error you do mostly by simulating your transistors, which you then validate by making wafers. You can simulate with mathematical models (for example in SPICE), but you should eventually try to simulate at the molecular, atom/electron/photon, and even quantum level; each finer-grained simulation level takes orders of magnitude more compute resources.
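For a flavor of what the SPICE level looks like, here is a minimal netlist sweeping a resistively loaded NMOS inverter. The level-1 model parameters are generic illustrative values, not from any real PDK:

```spice
* NMOS inverter with resistive load, DC sweep (illustrative model only)
M1 out in 0 0 NMOS W=1u L=0.18u
R1 vdd out 10k
Vdd vdd 0 1.8
Vin in 0 0
.model NMOS NMOS (LEVEL=1 VTO=0.5 KP=200u)
.dc Vin 0 1.8 0.01
.end
```

Running this under a simulator like ngspice gives the inverter's DC transfer curve; real design work swaps in foundry-supplied device models and far more detailed analyses.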

Chip quality is indeed limited by the magnitude of computing power and software: to design better (super)computer chips you need supercomputers.

We designed a WSI (wafer-scale integration) with a million processor cores and terabytes of SRAM on a wafer with 45 trillion transistors that we don't dice into chips. It would cost roughly $20K in mass production and would be the fastest, cheapest desktop supercomputer to run my EDA software on, so you could design even better transistors for the next step.

We also designed an $800 WSI 180nm version with 16000 cores with the same transistors as the Pentium chip in the RightTo article.


Has this WSI chip been taped out/verified? I must admit I am somewhat skeptical of TBs of SRAM, even at wafer scale integration. What would the power efficiency/cooling look like?


The full WSI with 10 billion transistors at 180nm has not been taped out yet; I need $100K investment for that. It has 16K processors and a few megabytes of SRAM.

I taped out 9 mm2 test chips to test transistors, the processors, programmable Morphle Logic and interconnects.

The ultra-low power 3nm WSI with trillions of transistors and a terabyte of SRAM would draw a megawatt and would melt the transistors. So we need to simulate the transistors better and lower the power to 2 to 3 terawatt.

There is a YouTube video of a teardown of the Cerebras WSI cooling system where they mention the cooling and power numbers. They also mention that they modeled their WSI on their own supercomputer, their previous WSI.


This sounds exciting, but the enormous and confusing breadth of what your bio says you are working on, and the odd unit errors (lowering "a megawatt" to "2 to 3 terawatt"), are really harming your credibility here. Do you have a link to a well-explained example of what you've achieved so far?


Have to agree. It's fine to have past achievements in the bio I guess but if you are looking for money it doesn't hurt to appear focused.


https://spectrum.ieee.org/1-bit-llm could lower power consumption of data centers.


Are you concerned that going away from standard cells will cause parametric variation, which reduces the value proposition? Have you tested your approach on leading FinFET nodes?


It's probably more of a node thing than a fab thing. You would have a much easier time getting the fab to do random stuff for you on a legacy node compared to a leading edge node.

Leading edge nodes are basically black magic and are right on the edge of working vs producing broken chips.

You as a customer would never want to be in a position where you are solely responsible for yields.


This is really cool! Do you have some more example images somewhere?


Thank you! Yes, you can try it directly (https://invertornot.com/docs): either upload an image, or spam the button to get random images and see their predictions.


For cases like this, you just need to convince it that it would be inappropriate to generate anything that does not follow your instructions. Mention how you are planning to use it as an avatar and it would be inappropriate/cultural appropriation for it to deviate.


> We are writing to inform you that we have discovered two Home Assistant integration plug-ins developed by you [...] that are in violation of our terms of service

> Specifically, the plug-ins are using our services in an unauthorized manner, which is causing significant economic harm to our Company.

> We take the protection of our intellectual property very seriously and demand that you immediately cease and desist all illegal activities related to the development and distribution of these plug-ins.

They seem to be threatening legal action because he is violating their terms of service? This doesn't make much sense to me


IANAL, but this sounds like bullshit intended at scaring people into compliance. An individual cannot afford legal fees against a corporation no matter what. The usual.


It looks like it is using a service running on Haier's servers, which is probably what brings terms of service into this.


I was confused by this article at first; it does a pretty bad job of drawing a distinction between the pure quadlet example at the start and the later example of using CoreOS to build and launch a VM that starts containers.

The basic usage of podman quadlets is putting an `app.container` in `/etc/containers/systemd/` containing something like the first snippet and then starting the unit. For someone familiar with systemd, this seems very very nice to work with.
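For reference, a minimal `app.container` along those lines might look like this (the image name is just a placeholder):

```ini
# /etc/containers/systemd/app.container
[Unit]
Description=Example app container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target default.target
```

After a `systemctl daemon-reload`, the generated `app.service` can be started and inspected like any other systemd unit.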


I'm still not clear whether quadlets are a feature of Podman or systemd...

The reliance on systemd is an issue on its own. Much has been said about its intrusion in all aspects of Linux, and I still prefer using distros without it. How can I use this on, say, Void Linux? Standalone Podman does work there, but I'm not familiar if there were some hacks needed to make it work with runit, and if more would be needed for this quadlet feature.


I mean, the best part about open source and Linux is that you have choice. Do you want to run an OS devoid of systemd? Fine. Will you be going against the tide and leaving a large part of the ecosystem behind? Yup.

I’ve chosen to embrace systemd and learn it, as it seems to be the de facto standard, rather than fight what I think is a futile war against it. That being said, I won’t force you to use it if you don’t want to. But I do not see quadlets using systemd as a failing.


systemd is extremely intrusive. anything dependent on it is a failure.


This is such a bizarre comment. I currently run systemd on a number of Linux machines; does that mean they are failing? Is taking advantage of systemd's features a failure? They run and do their function, so in what sense are they failing, and what does failure even mean here?


By my calculations, considering much of the world runs on RH/Ubuntu/Debian, all of which use systemd, things depending on systemd are far from being a failure, cos they'll run on the majority of systems.



systemd is a festival of power centralization and bad design. quite opposite of what linux should be.


"quadlets" are podman using systemd's extension mechanism [systemd.generator(7)] to create systemd services that invoke podman to run containers, based on the files you drop into /etc/containers/systemd.


While that's true, you really only need to blast out RGB data to get an image on screen. Most of what you are talking about is layered on top and optional.

I did a tiny HDMI implementation in an FPGA for a project, the TMDS implementation was what took the longest.
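To give a flavor of that encoding step, here is a rough Python sketch of the TMDS 8b/10b algorithm from the DVI 1.0 spec (transition minimization, then DC balancing via a running disparity counter). Function names are mine; a real FPGA implementation would of course be RTL, but the logic is the same:

```python
def tmds_encode(byte, cnt):
    """Encode one 8-bit video byte into a 10-bit TMDS symbol.
    cnt is the running disparity (ones minus zeros emitted so far);
    returns (symbol, updated cnt)."""
    d = [(byte >> i) & 1 for i in range(8)]
    n1 = sum(d)
    # Stage 1: minimize transitions with an XOR or XNOR chain.
    use_xnor = n1 > 4 or (n1 == 4 and d[0] == 0)
    q = [d[0]]
    for i in range(1, 8):
        q.append(1 - (q[i - 1] ^ d[i]) if use_xnor else q[i - 1] ^ d[i])
    q.append(0 if use_xnor else 1)  # bit 8 records which chain was used
    n1q = sum(q[:8])
    n0q = 8 - n1q
    # Stage 2: conditionally invert the low 8 bits to keep DC balance.
    if cnt == 0 or n1q == n0q:
        invert = q[8] == 0
        cnt += (n0q - n1q) if q[8] == 0 else (n1q - n0q)
    elif (cnt > 0 and n1q > n0q) or (cnt < 0 and n0q > n1q):
        invert = True
        cnt += 2 * q[8] + (n0q - n1q)
    else:
        invert = False
        cnt += -2 * (1 - q[8]) + (n1q - n0q)
    bits = [b ^ 1 if invert else b for b in q[:8]] + [q[8], int(invert)]
    return sum(b << i for i, b in enumerate(bits)), cnt

def tmds_decode(symbol):
    """Recover the original byte from a 10-bit TMDS symbol (sanity check)."""
    b = [(symbol >> i) & 1 for i in range(10)]
    if b[9]:                       # bit 9: inversion flag
        b[:8] = [x ^ 1 for x in b[:8]]
    d = [b[0]]
    for i in range(1, 8):          # bit 8: XOR (1) vs XNOR (0) chain
        d.append(b[i] ^ b[i - 1] if b[8] else 1 - (b[i] ^ b[i - 1]))
    return sum(x << i for i, x in enumerate(d))
```

Round-tripping all 256 byte values through encode/decode is a quick sanity check that the two stages invert cleanly.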

