Hacker News new | past | comments | ask | show | jobs | submit | maep's comments login

Shoutout to h0ffman, in my opinion the best contemporary junglist on old-school hardware. His 8-Bit Jungle music disk: https://www.youtube.com/watch?v=--J66FY7qro

Perhaps it's Gwynne Shotwell's doing. She seems to be one of the few people on this planet who can say "no" to Musk and not get bullied.


She is responsible for the success of a company that's becoming increasingly strategic to US interests. Nobody will mess with her, not even Elon.


"Inferior" is relative. The main focus of LC3 was, as the name suggests, low complexity.

This is hearsay: the Bluetooth SIG considered Opus but rejected it because it was computationally too expensive. This came out of the hearing aid group, where battery life and complexity are major constraints.

So when you compare codecs in this space, the metric you want to look at is quality vs. CPU cycles. In that regard LC3 outperforms many contemporary codecs.

Regarding sound quality, it's simply a matter of setting an appropriate bitrate. So if Opus is transparent at 150 kbps and LC3 at 250 kbps, that's totally acceptable if it buys you more battery life.


Regarding complexity, do you have any hard numbers? I can't find anything more than handwaving.


I remember seeing published numbers based on instrumented code, but could not find it.

I did a quick test with the Google implementation (https://github.com/google/liblc3), which is about 2x faster than Opus. To be honest, I expected a bigger difference, though it was just a superficial test.

A few things that also might be of relevance why they picked one over the other:

  - suitability for DSPs
  - vendor buy-in
  - robustness
  - protocol/framing constraints
  - control


Thanks for checking it, appreciated

- Well, 2x is nothing to write home about.

- DSP compatibility was probably considered but never surfaced as a stated reason, so it's hard to guess what that investigation found. Add to that the pricing and availability of said DSP modules.

- Robustness: well, that's one of the primary features of Opus, battle-tested by WebRTC, WhatsApp etc. (including packet loss concealment (PLC) and low bit-rate redundancy (LBRR) frames).

- Algorithmic delay for Opus is low, much lower than for older BT codecs, so that definitely wasn't a deal breaker.

- The ability to make money from a standard is definitely an important thing to have.


If used in a small device like a hearing aid, a 2x factor can have a significant impact on battery life.

VoIP in general experiences full packet loss, meaning that if a single bit flips, the entire packet is dropped. For radio links like Bluetooth it's possible to deal with some bit flips without throwing the entire packet away.

Until 1.5, Opus PLC was in my opinion its biggest weakness compared to other speech codecs like G.711 or G.722. A high compression ratio makes bit flips much more destructive.

As for making money, Bluetooth codecs have no license fees.


> For radio links like Bluetooth it's possible to deal with some bit flips without throwing the entire packet away.

Opus was intentionally designed so that the most important bits are in the front of the packet, which can be better protected by your modulation scheme (or simple FEC on the first few bits). See slide 46 of https://people.xiph.org/~tterribe/pubs/lca2009/celt.pdf#page... for some early results on the position-dependence of quality loss due to bit errors.

It is obviously never going to be as robust as G.711, but it is not hopeless, either.


You can check out Google's version which I assume is bundled in Android: https://github.com/google/liblc3


I'm glad this was rejected. The author considers violations of PEP 8 to be broken code, even though they raise no exceptions.

Lots of people treat PEP8 as the word of god, but apparently have not bothered to read it. "Many projects have their own coding style guidelines. In the event of any conflicts, such project-specific guides take precedence for that project."


> I'm glad this was rejected.

It’s an April Fools joke from forever ago…


If this tool was trained on open source code, what license does the generated code have? At least with Codepilot, people were able to generate verbatim GPL code, typos and all. More importantly, I wonder if the companies behind this type of tool offer legal or financial protection in case GPL code sneaks in and leads to expensive lawsuits.


I mean, people are also trained on GPL code, and I bet you can find a ton of functions copied from GPL projects in a million other projects.

But as long as these are tiny parts of a codebase (which will most probably be the case), I doubt anything can be done about it. No one will go to court over a few generic functions.


No they weren't able to generate the same existing code, both because that code is not included anywhere in the model, and because Copilot (not "Codepilot") has safeguards against this kind of situation, should it arise in the highly unlikely situation that a snippet is repeated thousands of times across thousands of repositories.

I've gotta let you know that people copy code snippets from all sorts of codebases with little regard for licenses anyway, because they're toothless in 99% of cases, AI or not. It's a nice illusion that anyone respects licenses, but it's just not true.


That's incorrect. CoPilot steals verbatim. Examples: https://justoutsourcing.blogspot.com/2022/03/gpts-plagiarism...


I've spent hours looking over code before delivering to FAANG. Our company had put a clause into the contract that our code was free of any GPL'd code. It had happened before and it was discovered. The whole thing was a very expensive exercise. I'm aware that many small startups, 90% of which go bust anyway, just ignore licenses, but that doesn't work when you play with the big boys.


If you look at licensed code, then write new code, do you also bring in those licenses?

It's been proved in court that AI does not infringe on copyright or licenses since it generates things from an understanding of the whole, instead of directly stealing, just like the human brain does.


That is going to need a source. All I see in these AI data-gathering exercises is that if the industry isn't a well-established litigious one, the companies will happily suck in all the data, license be damned. Code and art both fall under this. But when it comes to music, which is heavily litigated, suddenly the only content a company like Stability AI will use is open and voluntary, because in that case they worry about "overfitting and legal issues". (Refer Harmon ai)

Hypocrisy dressing up as progress in the machine learning field has been one of the most embarrassing scenes in software engineering recently. The genie may be out of the bottle but the fact is that a bunch of software engineers with a “move fast ethics later” attitude are the ones who let it out and they shouldn’t get to shrug it off for free.


>It's been proved in court that AI does not infringe on copyright or licenses since it generates things from an understanding of the whole, instead of directly stealing, just like the human brain does.

Do you have a source for that?

This SF Conservancy article[0] says that's not true:

>Consider GitHub’s claim that “training ML systems on public data is fair use”. We have not found any case of note — at least in the USA — that truly contemplates that question.

The first major court case I know about is the class-action case Matthew Butterick is trying to build.[1]

[0] https://sfconservancy.org/blog/2022/feb/03/github-copilot-co...

[1] https://githubcopilotinvestigation.com/


Astonishingly enough, every sentence in this post is untrue. There's been no court case on any of the models in question here. They don't work like human brains, nor do they understand anything they output. Even if they did, that output would of course still be subject to licenses, given that human code is subject to them, which is why those licenses exist in the first place.

If you ever plan to steal someone's code and justify it with "my brain is able to learn, therefore copyright doesn't exist" I warn you right now this will not fly.


US Court rulings do not automatically apply worldwide, and not everything it would apply to exists within its jurisdiction.


> If you look at licensed code, then write new code, do you also bring in those licenses?

If the “new” code is close enough to be considered a derived work then you will need a license.


> If the “new” code is close enough to be considered a derived work then you will need a license.

And how is that determined... in court at trial? By an unbiased 3rd party competent enough to understand both codebases?


Same for all programmers out there. Copilot will need to be careful, and like with everyone else, they'll learn.


> Could Throat mics be the ultimate in high quality audio for cheap?

How can those microphones even capture fricatives or plosives in decent quality? Most of the vocal tract lies behind the throat. I looked for some examples and they all sound terrible.

If you want good and cheap audio, get an off-brand USB large diaphragm condenser mic, for example this:

https://www.thomann.de/gb/the_tbone_sc_450_usb_b_stock.htm


A commonly used way to efficiently generate a discrete sine wave is an IIR oscillator. If it needs to run for a long time, use a real sin() to correct for rounding errors every N samples. With something like Q30 fixed point, that correction doesn't have to happen very often.

https://dspguru.com/dsp/tricks/sine_tone_generator/
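For what it's worth, here is a minimal Python sketch of such an IIR oscillator (floating point rather than Q30; the function name, parameters, and the correction interval are my own choices, not from the linked article):

```python
import math

def iir_sine(freq_hz, sample_rate, n_samples, correct_every=4096):
    """Generate sin(w*n) via the recurrence y[n] = 2*cos(w)*y[n-1] - y[n-2]."""
    w = 2.0 * math.pi * freq_hz / sample_rate
    k = 2.0 * math.cos(w)                        # the only multiplier in the loop
    y1, y2 = math.sin(-w), math.sin(-2.0 * w)    # seed with y[-1] and y[-2]
    out = []
    for n in range(n_samples):
        y = k * y1 - y2
        out.append(y)
        y2, y1 = y1, y
        if (n + 1) % correct_every == 0:         # re-seed from a real sin()
            y1 = math.sin(w * n)                 # to stop rounding drift
            y2 = math.sin(w * (n - 1))
    return out
```

The inner loop is just one multiply and one subtract per sample, which is why this trick is popular on small DSPs.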


The IIR oscillators quickly deteriorate, though. A more accurate way is to do something CORDIC-inspired: start with the 2D point (1, 0). Every iteration, you rotate it by (2*pi*f*T) radians (where f is the frequency you want, and T is 1/48000 or whatever). This requires you to have precomputed the sin and cos of that angle for the rotation matrix, but that can be done once, up front, as long as you don't need to change the frequency. If you do this for every sample, x will trace out your cos() and y will trace out your sin().

Accuracy errors are easy to account for in this scheme; just renormalize the vector so that the length is 1. It's cheaper and less code than doing a sin().

By the way, for linear interpolation (if you wish to keep the table), usually x + (y-x)*t is faster than x*(1-t) + y*t.
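A sketch of that rotation scheme, plus the cheaper lerp form, in Python (names and the renormalization interval are my own choices):

```python
import math

def rotation_osc(freq_hz, sample_rate, n_samples, renorm_every=1024):
    """Rotate the point (1, 0) by w radians per sample; y traces out sin(),
    x traces out cos(). Renormalize periodically to undo rounding drift."""
    w = 2.0 * math.pi * freq_hz / sample_rate
    c, s = math.cos(w), math.sin(w)        # rotation matrix, computed once
    x, y = 1.0, 0.0
    out = []
    for n in range(n_samples):
        out.append(y)
        x, y = x * c - y * s, x * s + y * c
        if (n + 1) % renorm_every == 0:
            g = 1.0 / math.hypot(x, y)     # pull the vector back to length 1,
            x, y = x * g, y * g            # cheaper than calling sin()
    return out

def lerp(x, y, t):
    # x + (y - x) * t: one multiply, vs. two for x*(1-t) + y*t
    return x + (y - x) * t
```

As a bonus, the rotation form gives you a phase-locked cosine for free if you also emit x, which is handy for quadrature (I/Q) signals.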


It's more than a skin. Microsoft put some effort into optimization, like using a segmented heap allocator.


Meteorology is a sub-branch of physics.


> Every wired headphone I've had started to break down after a year or so, including ones with a detachable cable.

To counter your anecdotal evidence, let me offer mine.

My AKGs are almost 20 years old now. I only had to replace the plug once, which was a cheap repair I could do myself.


I've used the $7 to $10 Panasonic HT21 for the past 15 years, each pair lasting many years.

