Hacker News | axoltl's comments

I frequently have to interface with custom on-the-wire protocols and the PIO block is fantastic. Situations that used to require a CPLD or full-blown FPGA I can now handle with a sub-$1 microcontroller. It significantly reduces development time as well.

It's like XMOS but actually usable.


The E9 erratum has evolved since the early days when the built-in pull-downs were implicated. The issue is that the input pad macro sources excess current (up to ~120µA) when the external voltage is between Vil and Vih. This causes the pad to float at 2.2V, whether the built-in pulldown is activated or not. The internal pulldowns don't sink enough current to force the voltage outside of the undefined range.


I really, really want to like the RP2350. The PIO machines punch so far above their weight that they could turn the chip into a killer solution in a lot of circumstances.

This single erratum breaks my intuition enough that I'm "scared" to pull the chip into a design. I understand that it doesn't automatically break most use cases and that there are software workarounds. But it violates a core "Hi-Z is Hi-Z" understanding that I design around, and I'd really hate to respin a board over something so dumb. On an STM32 this would have prompted a new silicon stepping, and that's what bugs me about it.


My take is that devastating errata are just normal. Most of them are fixed silently and end users never hear about them, but sometimes you're the customer who gets to find them. Right before I joined an embedded device team at Google, they brought up a new chip. Turns out... branch prediction was broken. It would mispredict a branch and execute the misprediction anyway. I imagine that was no fun to debug. But a workaround was fine and I'm sure subsequent customers never heard about it, and the devices worked fine for many years.

One time I lived in an apartment that suffered from a variety of pests coming in. I expanding-foamed everything that looked like an entry point, the problem went away, and I stopped caring. However, I could still hear mice crawling around inside the walls late at night and decided just not to be bothered. This experience prepares one for embedded systems / microcontroller work ;)


I mean, it's normal for like half of the peripheral functionality on STM micros to outright not work. But usually that’s something relatively niche like the CAN transceiver or the DMA controller targeting a particular MMIO range or some specific mode of the external memory interface. Having an erratum affecting literally all GPIO pins is unusually scary.


I'm sad these seem to not have the fix for the E9 erratum, making them entirely unsuitable for a lot of projects. I'll keep using the RP2040 instead, but the extra oomph of the M33s over the M0+s would've been very welcome in a few places.


The erratum from the datasheet: "Increased leakage current on Bank 0 GPIO when pad input is enabled", which affects RP2350 A2.

Previous discussion: https://news.ycombinator.com/item?id=41479261


I have zero experience in ASIC design, but it's not clear to me that this is fixable at all with just metal layer changes (which are around $50K a pop) like the fixes for previous errata of the RP2040.

And a complete set of new masks at 40nm would probably cost them around $1M which I can't imagine would make financial sense for them, unless their contract with the pad designer includes compensation for that kind of issue.

So sadly I don't think this is likely to get fixed.


$1M is peanuts if you're making chips. They should have done a respin for such a fundamental issue. This erratum will be one of the first things you see when you research the part, even if they later fix it. You can see even here it's half the discussion so far.


I don't see how it would be 'peanuts'.

The RP2350 is 2.65x the die size of the RP2040, so about 8000 chips per $3000 TSMC 40nm wafer gives you ~$0.35 per die. Say 10 cents per die for singulation and testing, and another 10 cents for packaging and processing into reels. I have no idea what the M33 per-unit license is, but something like 5-10 cents seems reasonable, so let's assume 5.

So that's a cost of $0.60 with a reel price of $0.80, for a $0.20 gross profit per chip. So the break-even point for a full respin of the masks is around 5 million sold chips, or 10 million chips when we're including the original masks. For comparison, according to Eben Upton, 10 million RP2040s have been made (but not necessarily sold) from 2021 to 2023.
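For anyone who wants to poke at the numbers, here's the back-of-envelope model above as a tiny C program (every figure is a rough estimate from this comment, not an official number):

    #include <stdio.h>

    int main(void) {
        /* All figures are rough estimates from the comment above, not official numbers. */
        const double die_cost     = 0.35;  /* ~$3000 wafer / ~8000 dies, rounded */
        const double test_cost    = 0.10;  /* singulation and test */
        const double package_cost = 0.10;  /* packaging and reeling */
        const double license_cost = 0.05;  /* assumed M33 per-unit license */
        const double reel_price   = 0.80;
        const double respin_cost  = 1e6;   /* assumed cost of a full 40nm mask set */

        double unit_cost    = die_cost + test_cost + package_cost + license_cost;
        double gross_profit = reel_price - unit_cost;

        printf("unit cost:    $%.2f\n", unit_cost);    /* $0.60 */
        printf("gross profit: $%.2f\n", gross_profit); /* $0.20 */
        printf("break-even:   %.0f million chips\n",
               respin_cost / gross_profit / 1e6);      /* ~5 million */
        return 0;
    }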

Would a fix result in 5 million additional chips being sold? Maybe, and for all we know they could be working on doing that just now. Maybe their contract with the pad design provider even pays for a new mask set and it actually costs them nothing. Maybe all of this can be fixed with a cheaper metal layer fix.

But either way this isn't 'peanuts', and it's not clear-cut whether doing it is economically viable.


> $1M is peanuts if you're making chips

Having customers use an external pull-down is free (to the foundation) and costs nano-peanuts to each customer (i.e. cents per unit)


I can't really even understand what the defect is - what kind of projects does it affect? I'm only interested in fairly basic applications of the rpi but I don't have the technical chops to know if I'm setting myself up for failure...


So normally when you have a microcontroller pin and you configure it as an input, you expect it to SINK current (as in, take voltage 'in'). The bug is that if the external voltage is between 1V and 2.5V (I don't remember the exact voltages, don't quote me on that) the pin will SOURCE current, acting almost as if you'd set it to be an output pin. It's not a lot of current, but it's enough to hold the pin at 2.2V.

This happens on all microcontrollers btw. Random charge accumulates and pushes the voltage on your pin to some arbitrary point. The way you fix this normally is by providing a path for that charge to escape in the form of a pull-down resistor. Usually you need something in the 100k range. Because of this bug you need something more in the 5k range.

For some circuits that's fine, for others it's more problematic.
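To put rough numbers on the "stronger pull-down" point: the pad can source on the order of 120µA (per the erratum description upthread), and the pull-down has to hold the pin under the logic-low threshold. The 0.8V threshold below is my assumption for illustration:

    #include <stdio.h>

    int main(void) {
        /* ~120uA sourced by the pad (from the E9 erratum description above);
         * 0.8V is an assumed worst-case logic-low input threshold. */
        const double i_leak = 120e-6; /* A */
        const double v_il   = 0.8;    /* V */

        /* The pin must stay below Vil while the resistor sinks the leakage,
         * so R < Vil / Ileak. */
        printf("max pull-down: %.1f kOhm\n", (v_il / i_leak) / 1e3); /* ~6.7k */
        return 0;
    }

Hence something in the 5k range instead of the usual 100k.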


> This happens on all microcontrollers btw. Random charge accumulates and pushes the voltage on your pin to some arbitrary point. The way you fix this normally is by providing a path for that charge to escape in the form of a pull-down resistor. Usually you need something in the 100k range. Because of this bug you need something more in the 5k range.

Many microcontrollers provide an internal, selectable pull-up or pull-down resistor (or neither). For example, on the STM32, it's a 30-50k, and individually selectable on a per-pin basis:

https://www.keil.com/dd/docs/datashts/st/stm32f10xxx.pdf#G11...

It's normal for pins to float, it's not normal for them to both lack an internal bias option and to float to a condition where they source significant current. You don't typically put a very low impedance external pull-down resistor on every single input pin.
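For concreteness, enabling one of those internal pulls with the ST HAL looks roughly like this (pin and port are arbitrary; a sketch, not production code):

    #include "stm32f1xx_hal.h"

    /* Configure PA0 as an input with the internal pull-down enabled
     * (the internal pull is in the 30-50k range per the datasheet above). */
    void configure_input_with_pulldown(void) {
        GPIO_InitTypeDef init = {0};

        __HAL_RCC_GPIOA_CLK_ENABLE();

        init.Pin  = GPIO_PIN_0;
        init.Mode = GPIO_MODE_INPUT;
        init.Pull = GPIO_PULLDOWN;
        HAL_GPIO_Init(GPIOA, &init);
    }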


> when you have a microcontroller pin and you configure it as an input, you expect it to SINK current (as in, take voltage 'in').

That's not how it works; input does not mean sink current. An OUTPUT low sinks current, and an output high sources current.

In a "normal" microcontroller (with no silicon bugs), an input pin is in a high-impedance state ("Hi-Z"), meaning it doesn't sink or source current -- but it can be configured to have an internal pull-up or pull-down resistor, in which case it will (respectively) source or sink a little bit of current, enough to keep it at high or low voltage (i.e., enough for a logical high or low) unless there's something else driving it.

The problem with the RP2350 is that (under some circumstances) there's a current leakage between the pin and the voltage rail, so when a pin is configured as input with a pull-down resistor, the voltage will not go down to the low level it needs to read a logical low as expected: it will be at around 2.2V, which is in the "undefined" region.


You are, of course, 100% correct. In my haste to explain the 'sourcing' behaviour of the errata I accidentally jumped to the sinking verbiage. Input pins are "pressure gauges", not "flow meters".


Simplified: when used as a digital input, a GPIO pin has three voltage ranges: 0V up to the logical low threshold, VCC down to the logical high threshold, and an undefined range in the middle. The RP2350 burns a lot of power while in this middle range when any proper MCU shouldn't, and it's quite common for this to happen. The "cure" is almost as bad as the problem, because to stay out of this range you have to apply a strong external pull-up/pull-down resistor. This too wastes power, just not quite as much.


In what situations is including an external pull-down resistor not a good enough 'fix'?


I haven't completely thought this through, but I can see issues with porting PIO-USB [0], for example. USB relies on a few different pulls to Vcc and Gnd with pre-defined values. On the host side you have to pull your signal lines to ground with 15k resistors. Those aren't strong enough to overcome the current leakage. The trick where you enable pad inputs only when you're reading the pin doesn't work here either, as you can't do that in PIO.

Things like logic analyzers are going to have similar issues where their inputs will have to be buffered to provide a suitably low impedance path.

It's not insurmountable but it's enough for me to just fall back on the RP2040 in situations where I don't want to spend the effort to validate the RP2350.

0: https://github.com/sekigon-gonnoc/Pico-PIO-USB


It's not an issue for PIO-USB, since the pullup/pulldowns in USB are just used for device and speed detection. And for the pulldowns to do anything the pins have to be high impedance which you can't do in PIO code anyway. So just making sure to drive the pins low for 1 cycle before putting them in high impedance mode is sufficient as a workaround.
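In pico-sdk terms, my reading of that workaround is roughly the following (a sketch, not the actual Pico-PIO-USB code; release_pins_low_first and dp_pin are hypothetical names):

    #include "hardware/pio.h"

    /* Drive D+/D- low briefly, then release them to inputs, so the pads never
     * settle in the undefined region. The equivalent inside a PIO program
     * would be:
     *     set pins, 0      ; drive low
     *     set pindirs, 0   ; release to high impedance
     */
    static void release_pins_low_first(PIO pio, uint sm, uint dp_pin) {
        const uint32_t mask = 3u << dp_pin;              /* D+ and D- */

        pio_sm_set_pins_with_mask(pio, sm, 0, mask);     /* outputs driven low */
        pio_sm_set_pindirs_with_mask(pio, sm, 0, mask);  /* now make them inputs */
    }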

Could be an issue for logic analyzers, though usually you'd have a voltage translator in front of that anyway.


In situations where the power wasted by such a strong external pull-down or pull-up is unacceptable, e.g. anything running off a small battery for more than a few hours. The resistors can easily burn more power than the whole chip.
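As a rough illustration (the rail voltage and resistor value below are assumptions, based on the ~5k figure mentioned upthread):

    #include <stdio.h>

    int main(void) {
        /* Assumed: 3.3V rail, 5.1k external pull-down held against the
         * opposite rail continuously (worst case). */
        const double v_rail = 3.3;
        const double r_pull = 5100.0;

        double i = v_rail / r_pull; /* ~0.65 mA */
        double p = v_rail * i;      /* ~2.1 mW  */
        printf("%.2f mA, %.2f mW per pin\n", i * 1e3, p * 1e3);
        return 0;
    }

That's per pin, continuously, which can easily dwarf the sleep current of the MCU itself.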


The bottom 2 bytes of a pointer contain two bits of the slide, assuming it's even a pointer into the kernelcache itself.

I'd take half a sandwich.


Little endianness considered harmful


Yeah, you could probably contrive a situation where you get more interesting information (page numbers maybe?), but it definitely doesn't seem likely to me.


Could you elaborate on the hardware timers? How are these different from the timers that exist in the microcontroller, and what design changes are required in hardware?

Or is your point that the outputs from these timers can't be muxed to the pins that current hardware wires up?


> Or is your point that the outputs from these timers can't be muxed to the pins that current hardware wires up?

This. As far as I'm aware, other boards are just designed in the way that makes the layout easiest. The timers we're using are also fixed to specific pins with no muxing (HRTIM on the STM32G474 specifically).


I was both excited and disappointed reading the list of hardware features. I, admittedly, have spent the last 6 years working in (not-human-rated) aerospace. Excited because the points you make on the site are definitely things that I've been disappointed to see missing on most existing hardware, and disappointed because... I wish that that list of features wasn't special at all and everyone made more robust hardware like that :)

I'm very curious about the Ada firmware and am hoping I'll have some time around Christmas break to have a dig into it. Last time I did anything with Ada was in grad school about 15 years ago and I loved it; I'm really excited to see what the bridge between Ada and a microcontroller looks like (see above... aerospace!)


> what the bridge between Ada and a microcontroller looks like

At a high level, we're just putting records with representation clauses directly on to a UART: https://github.com/Prunt3D/prunt_board_2_software/blob/maste...


Macbooks have a dedicated ALS (Ambient Light Sensor). They don't use the camera.


No firmware is required. Macbooks manufactured since 2014 turn on the LED whenever any power is supplied to the camera sensor, and force the LED to remain on for at least 3 seconds.

(Source: I architected the feature)


Thanks for your reply — coming from the source themselves, I can only feel flattered that you responded.

> Macbooks manufactured since 2014 turn on the LED whenever any power is supplied to the camera sensor, and force the LED to remain on for at least 3 seconds.

That convinced me originally, I think; good old days! I'd almost forgotten about it. The way you phrased it, it sounded like it was 50% an OS concern to me.

But if the camera and LED really share a power supply, and the LED is always on without any external switch, good then!


I was not very popular with the camera firmware folks for a while. They had to re-architect a bunch of things as they used to occasionally power on the camera logic without powering the sensor array to get information out of the built-in OTP. Because the LED now came on whenever the camera was powered they had to defer all that logic.


What does OTP stand for in this case? The camera PROM??


Apologies. OTP is One-Time-Programmable. The physical implementation of this varies, in this specific case it was efuses (anti-fuse, actually). It's used for things like calibration data. For a camera it contains information about the sensor (dead pixels, color correction curves, etc.).


I worked on this feature for Apple Macbooks around 2014 as the security architect. All Macbooks since then have a camera indicator LED that is (barring the physical removal of the LED) always on for at least 3 seconds. This feature is implemented in gates in the power management controller on the camera sub-board.

There are a LOT of pitfalls still (what if you manage to pull power from the entire camera sub-assembly?); this was a fun one to threat-model.


I happen to have some first-hand knowledge around the subject! In 2014 someone did a talk[0] on disabling the camera on some older Macbooks. It was fairly trivial, basically just reflashing the firmware that controlled the LED. I worked on the security team at Apple at the time and in response to this I attempted to do the same for more modern Macbooks. I won't go into the results but the decision was made to re-architect how the LED is turned on. I was the security architect for the feature.

A custom PMIC for what's known as the forehead board was designed that has a voltage source that is ALWAYS on as long as the camera sensor has power at all. It also incorporates a hard (as in, tie-cells) lower limit for PWM duty cycle for the camera LED so you can't PWM an LED down to make it hard to see. (PWM is required because LED brightness is somewhat variable between runs, so they're calibrated to always have uniform brightness.)

On top of this the PMIC has a counter that enforces a minimum on-time for the LED voltage regulator. I believe it was configured to force the LED to stay on for 3 seconds.

This PMIC is powered from the system rail, and no system rail means no power to the main SoC/processor so it's impossible to cut the 3 seconds short by yoinking the power to the entire forehead board.

tl;dr On Macbooks made after 2014, no firmware is involved whatsoever in enforcing that the LED comes on when frames could be captured, and no firmware is involved in enforcing that the LED stays on for 3 seconds after a single frame is captured.

0: https://www.usenix.org/system/files/conference/usenixsecurit...


There seems to be widespread anxiety regarding cameras, but hardly anyone ever talks about microphones. Are conversations not much more privileged information than potentially seeing someone in their underwear?


"All Apple silicon-based Mac notebooks and Intel-based Mac notebooks with the Apple T2 Security Chip feature a hardware disconnect that disables the microphone whenever the lid is closed. On all 13-inch MacBook Pro and MacBook Air notebooks with the T2 chip, all MacBook notebooks with a T2 chip from 2019 or later, and Mac notebooks with Apple silicon, this disconnect is implemented in hardware alone." [1]

[1] https://support.apple.com/guide/security/hardware-microphone...


That's what they said about the first gen Facetime cameras. "oooh don't worry, it's controlled in hardware!"

We have no way of verifying that anything they said in that document is true.


I'm inclined to believe it. If someone managed to prove Apple's lying about it, there would be serious reputational (and other) risks to their business. I also can't imagine how they would benefit from such a fabrication.

That said, I still use "Nanoblock" webcam covers and monitor for when either the camera or microphone are activated.


It's clear Apple define "Hardware" as "Not using the main CPU". They've pretty much admitted it's firmware based; otherwise the T2 chip simply wouldn't need to be mentioned at all.


It is implemented in a dedicated small CPLD that cannot be flashed by any software means. My understanding of the relation to the T2/SEP is that this CPLD serves as a kind of "IO expander" for the T2/SEP, which also hardwires logic like this.


The T2 chip is mentioned in the quoted passage as an indicator of the architecture version, not necessarily an indicator that the T2 chip is directly involved.


Obviously the camera is also 'disabled' when the lid is closed regardless of the controlling circuitry. So while that's a good feature, it's not relevant.


Depends what your threat model is?

Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby. But naked images of Abby can head off into the ether and be propagated more or less forever, turn up on hate sites, be detrimental to careers etc.

If your threat model is leaking company secrets then sure, microphone bad, as is anything having access to any hardware on your machine.

So sure, maybe people ought to be more concerned about microphones as well, rather than instead.


My point is that the threat model is backwards. The threat associated with a camera is the least severe compared to anything else a malicious person could do with access to your computer. Recorded conversations, chats and email, browsing history, etc. are all much more likely to result in harm if leaked than a recording of you innocently in your home.

> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.

That destroys families, standing within a community, and very often careers.


I don't think it is backwards, personally. The threat of public humiliation, and the capability for someone to spy on what you do in your own home, is worse with the camera.

> chats and email, browsing history, etc are all much more likely to result in harm if leaked than a recording of you innocently in your home.

This is far less of an intrusion for most people than recording what they are actually doing in their own home IRL. People know that information can be hacked, they don't expect and react quite differently to someone actually watching them.

> That destroys families, standing within a community, and very often careers.

Yes, but it doesn't stay on the internet forever in quite the same way.

Now I get to some extent what you're saying - aren't the consequences potentially worse from other forms of information leak?

Maybe. It depends on how you weight those consequences. I'd put (for example) financial loss due to fraud enabled by hacking my accounts as far less important than someone spying on me in my own home. Even if they didn't use that to then extort me, and were using the footage for ... uh ... personal enjoyment. I think a lot of people will feel the same way. The material consequences might be lesser, but the psychological ones not so much. Not everything is valued in dollars.


I think we may just be bumping into cultural differences here. I grew up in a household where being naked around family members was common. I spend time in clothing-optional spaces. I rarely draw the blinds on my windows, etc. I'm not concerned with what other people think in this way and such images could never be used to extort me. Consider the case of Germany - people there are extremely concerned about their privacy and data protection. At the same time public nudity is an entrenched cultural norm.

It's also known that people are not very good at assessing risk. People are more worried about dying at the hands of a serial killer than they are of dying in a car crash or slipping in the shower. I feel you're underplaying the psychological harm of having all of your data crawled through by a creep (that would include all of your photos, sites visited, messages sent, everything).

All I can really say is that if someone gained access to my machine, the camera would be the least of my concerns. That's true in nearly every context (psychological, financial, physical, etc).


Empirically, most low-level extortion does seem to be about leaking video. I would see a threat model based on 'criminal wants to extort me for money' as more reasonable than 'creep wants to look through my computer for creeping'. And it seems like extortion focuses on video, so that is the bigger threat, even if it is less invasive.

I presume the reason behind this is that video is much more likely to be re-shared. Sending Bob a zip of someone's inbox is unlikely to be opened, and even less likely to be shared with strangers. But send Bob a video of Alice, and he might open it. Heck, he might not know what the video is until he opens it. So even if he is decent, he might still see it. And if he is less decent and shares it, strangers are much more likely to actually view it.


I think extortion in the form of "I've encrypted your data, pay to get it back" is much more common. Ransomware. It's scalable, automatable. Extortion of video is harder to automate.


I think, though am prepared to be wrong, that you'll probably find yourself in the minority there.

It's not just about nudity and extortion, but someone having access to watch you, whenever they feel like, in your safe space. That sense of violation that people also feel when (for instance) they have been the victim of burglary - the missing stuff is often secondary to the ruined sense of security. There's a vast difference between leaving your curtains open and having someone spying on you from inside your own home.

Is it rational to put this above other concerns? That's a whole different debate and not one I'm particularly interested in. But it explains why people are concerned about cameras over 'mere' data intrusion.


I'm not arguing a point here, but I'm curious how many actual instances exist where someone is naked, or exposed in some other extortion-worthy way (accidentally, of course), from the position of their webcam. I too would be much more concerned about my microphone, since I know I've had conversations in front of or next to my machine that I wouldn't want "out there". In terms of where my camera is, I would imagine it would catch me picking my nose every so often, but that's about it.


People watch porn on their laptops. Even just your orgasm face would be embarrassing for most people.


> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.

This isn't true at all, even for private citizens. Your friends, parents, children, and colleagues are all likely to care.


It's very limited, it's certainly not going to be passed around like naked pictures could be.


Yes, photos of naked people are used to extort them (usually into just paying the holder to delete them).

https://news.ycombinator.com/item?id=42261730


This raises a different but related question: in what world should a victim of a crime be extorted for doing innocent things in their home? If a peeping tom took a photo through a window, could that be used to extort someone?

When people are extorted for these kinds of things it's usually catfishing that leads to sexual acts being recorded. That's not related to cybersecurity.


Fear of harassment. You don't want your coworkers to see you naked, do you?

edit: s/baked/naked/ :D


They are, but people aren’t scared of those because they can’t see them staring at them.


> and no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.

I may be the oddball here, but that 3 second duration does not comfort me. The only time I would notice it is if I am sitting in front of the computer. While someone snapping a photo of me while working is disconcerting, it is not the end of the world. Someone snapping photos while I am away from the screen is more troublesome. (Or it would be if my computer was facing an open space, which it doesn't.)


Right, so this is all defense in depth. That LED is sort of the last line of defense if all others have failed, like:

The exploit mitigations to prevent you from getting an initial foothold.

The sandboxing preventing you from going from a low-privileged to a privileged process.

The permissions model preventing unauthorized camera access in the first place.

The kernel hardening to stop you from poking at the co-processor registers.

etc. etc.

If all those things have failed, the last thing to at least give you a chance of noticing the compromise, that's that LED. And that's why it stays on for 3 seconds, all to increase the chances of you noticing something is off. But things had to have gone pretty sideways before that particular hail-mary kicks in.


OK, but then what? Leave the LED on for 24 hours after you've captured a single frame? At that point the LED isn't really indicating camera usage because you'll just get used to seeing it on all the time whether the camera is in use or not.


A random thought that probably won't cover all cases: a second LED or a colour LED. One LED/colour indicates the camera is active; the second can be on for a longer period of time (and perhaps dim as time goes on). I prefer the second-LED option since it is better for us colourblind folks, though I suspect there would be more resistance to the idea.

And, of course, covers are an option.


It's strange that none of these companies will include a closable cover for the camera. I got one aftermarket. It is very reassuring since no hacking or accidental misclicks on my part can move the cover.


I've seen HP desktops that have a closeable camera cover, and Lenovo does on some ThinkPads [1], so probably others do too. Laptops usually have very little depth available in the screen part though, which is why most laptop cameras are crappy (exceptions include Surface Pro and Surface Book, which have more depth available and so much better cameras than most, but no cover - at least their camera light is not software controlled).

[1] https://www.businessinsider.com/lenovo-thinkshutter-laptops-...


Higher end Lenovos and Dell Latitude / Precision tend to. Was one reason why I went for a Latitude 74XX rather than a 54XX or 34XX when looking at them last time.


I had a closable cover and someone shut my laptop with enough force that the cover caused the screen to break. Be careful when closing.


Was it a built-in camera cover, or a third-party one? Apple specifically (and possibly other manufacturers?) recommends against third-party covers because the tolerance is so close:

https://support.apple.com/en-us/102177


Sure, that is going to be true for anything with moving parts. Yet I would also imagine that design and materials are a factor here. Let's face it, these covers aren't exactly common on laptops. There is probably a lack of good design practices for them.


I also purchased a cover for mine, although in a pinch, the removable stickers on fruit work well.


I have a sticky piece of post it note more or less permanently affixed over my camera.


I can remember when someone spotted tape over Zuckerberg's laptop camera. Ref: https://www.theverge.com/2016/6/21/11995032/mark-zuckerberg-...


My Thinkpad does.


Thanks, this is the reason I browse Hacker News


Thanks for posting this interesting tidbit! I find this kind of knowledge absolutely fascinating!


Thank you for your work on this! I wish some other large companies took privacy that seriously.


Thank you for doing this.


I assume you're no longer working on it, but why not just wire it so that:

- The LED is in parallel, but with the sensor voltage supply, not the chip

- Camera sensor idle voltage = low voltage for the LED (be it with stepping if needed)

- Camera sensor active voltage = high voltage for the LED (again, stepping if needed)

- little capacitor that holds enough charge to run the LED for ~3 seconds after camera goes back to idle voltage.

Good luck hacking that :)


That's basically how this works, but manufacturing electronics at a massive scale requires some more flexibility. For example, capacitors have a pretty large tolerance (sometimes +/- 20%) and LEDs have quite a bit of variety in what voltages they'll work at. So for some people the LEDs might last 3 seconds, for some they might last 5s. Using a capacitor also means the LEDs will fade slowly instead of just turning off sharply.

If the LEDs come from a different supplier one day, who is going to make sure they're still within the spec for staying on for 3 seconds?

(And yes, I have long since parted ways with Apple)

Edit:

And to add on: that capacitor needs time to charge, so now the LED doesn't actually come on the moment the sensor comes on; it's slightly delayed!


Thank you for the clarifications. Armchair (well, workbench) engineering strikes again haha!


You can't drive an LED that way in production electronics: you need to use an LED driver circuit of some kind to ensure the LED has constant current, and also to protect against failure modes. Also, a capacitor large enough to power a daylight-visible LED for 3 seconds is not as "little" as you're thinking; there's likely not enough space in a laptop lid for one of those. A driver circuit would be smaller and thinner.
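To put a rough number on that (the LED current and usable voltage headroom below are assumptions):

    #include <stdio.h>

    int main(void) {
        /* Assumed: a daylight-visible indicator LED around 10mA, held on for
         * 3 seconds, with roughly 1V of usable headroom on the capacitor. */
        const double i_led   = 10e-3; /* A */
        const double t_on    = 3.0;   /* s */
        const double delta_v = 1.0;   /* V */

        double c = (i_led * t_on) / delta_v; /* C = I*t / dV */
        printf("required capacitance: ~%.0f mF\n", c * 1e3); /* ~30 mF, i.e. 30,000 uF */
        return 0;
    }

That's a physically large part for a laptop lid, which is why a driver circuit makes more sense.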

Agreed, however, that the LED should be controlled by the camera sensor idle vs. active voltage.


I've seen a million people parroting "oh now Apple fixed it!" and not a single person who has actually verified/proved it. Go on, show me any third-party security researcher who has verified this claim by examining the actual hardware.

You'll pardon us all if we don't really believe you, because a) there's no way for any of us to verify this and b) Apple lied about it before, claiming the LED was hard-wired and so on, except it turned out it was software controlled by the camera module's firmware.


I'd love for a third party to verify the claim! I'm just giving you an overview of the work that went into making this a thing, knowing full well you have absolutely no reason to trust me.

The LED being "hard-wired" is a tricky statement to make, and I actually wasn't aware Apple has publicly ever made a statement to that effect. What I can say is that relying on the dedicated LED or "sensor array active" signal some camera sensors provide, while technically hard-wired in the sense there is no firmware driving it, is not foolproof.


> Apple lied about it before, claiming the LED was hard-wired in blah blah same thing, except it turned out it was software controlled by the camera module's firmware.

Source?


Two modes are described. One is direct acoustic interference: an active mode where sound waves are cancelled out and the fabric effectively is "just" a speaker. The second is a passive method where the sound is dampened by way of sinking the current produced by the piezo fiber.


Unfortunately the passive method is not very effective at blocking sound; 75% is only about 5 dB.

