Hacker News | klodolph's comments

There are also USB-C 2 cables.

USB-C is the bane of my existence. Everything looks the same, but certain cables won't charge certain devices for seemingly no reason, other cables won't transfer data, and there's no easy way (AFAIK) to tell the difference.

> certain cables won't charge certain devices

Not sure how you can make a cable that doesn't connect power from end to end. I can see it not charging as fast as others if it doesn't have the bits required for higher-current support. And if a device requires >5V to charge, that's on the device, not the cable.

> other cables won't transfer data

Again, not sure you can make a cable that doesn't connect the USB 2 pair from end to end. But if the device doesn't use USB 2 and requires something else without mentioning it, then that again seems to be on the device, not the cable.


FWIW the PS5 controller is super particular about what charger you use due to Sony being dumb, but the deciding factor there is the charger, not the cable.

Source is the eternal benevolent champion of USB-C compliance testing, Benson Leung: https://www.reddit.com/r/UsbCHardware/comments/tdduha/commen...

(and also my personal experience, but Benson explains why)


It's probably a problem with my devices. I've never seen these problems with more expensive devices, but my cheap bluetooth speakers will only charge with certain cables.

I also have cheap cables that don't seem able to do data transfer. Guessing it's not actually following the USB-C spec.


Are your bluetooth speakers connected over a C-to-C cable or is there any legacy USB in the mix (type-A and/or microusb)? The reason I ask is legacy USB expected 5 volts to be supplied by default, whereas in type-C you have to specifically request any current. So some C-to-A / A-to-C adapters/cables include the resistors to request the current whereas others do not, leading to legacy USB devices not getting power through some adapters/cables.
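For reference, the current advertisement described above comes down to which pull-up resistor (Rp) the cable or adapter presents on the CC line. A toy lookup of the Type-C spec's standard Rp values for a 5 V pull-up (the function name and exact strings are just illustration):

```javascript
// Maps the CC pull-up (Rp, pulled to 5 V) that a charger or A-to-C
// cable/adapter presents to the current it advertises to the sink.
// Resistor values are the Type-C spec's standard Rp table.
function advertisedCurrent(rpOhms) {
  switch (rpOhms) {
    case 56000: return "default USB power (500/900 mA)"; // required for A-to-C cables
    case 22000: return "1.5 A";
    case 10000: return "3.0 A";
    default:    return "no valid advertisement";
  }
}
```

An adapter that omits the 56 kΩ resistor entirely falls into the "no valid advertisement" case, which is exactly why some legacy devices see no power at all through it.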

USB-C cables aren't merely wires and connectors; some have electronics embedded in them.

I think the reasons are a little more boring: a combination of different factors contributed to Stadia's failure.

I agree with the assessment about eshell. I use eshell for one thing only—quick terminal sessions in the same directory as the file I'm editing.

2009, please, sorry. 1993 is fun and all, I can go relive the dreams of Microsoft Encarta, but 2009 has Mac OS 10.6, gigabit ethernet everywhere, and USB.

Pretty much. I think we forget about the troubles of the world pre-USB. Also, by 2009 OSes were very stable but hadn't taken on too much bloat yet.

Yeah, SCSI not being hot-swappable was quite a chore.

What are you doing with TTL logic in 2026, out of curiosity?

(I’m not saying it’s not used, but the only thing I’d use TTL for is building old circuits out of the Forrest Mims books.)


Reasonable question and hopefully an interesting answer...

The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.

I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.

LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!


What was the answer?

I’m just curious at this point about what the quality of the answer is, just because you made a point about LLM use not turning your brain into mush.

I’ve not really used LLMs to answer questions, since it hasn’t gotten me the answers I wanted, but maybe I’m just set in my ways.


I'm actually pretty thrilled that you asked, because I think that this chat is an extremely solid example of LLM usage in the EE domain, and I'm happy to share.

https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...

I definitely led some questions to try and squeeze new-to-me perspectives out of it; for example, there could be tricks that make the active high variant more useful in some scenarios.

I think it does a good job of surfacing adjacent questions you might not realize you were eager to ask, as well as showing how it's able to critically evaluate real-world part suitability. I do find that ChatGPT in particular does better with a screengrab of the most likely parts vs a URL to the search engine.


I see the chat, but it looks like you’re not actually considering using TTL anywhere, and ChatGPT isn’t giving any explanations about TTL?

> I would definitely like to understand HCT vs HC (CMOS vs TTL) much better than I do, which currently isn't at all.

I think what ChatGPT should have explained at the beginning is that both HCT and HC are CMOS logic families, it’s just that HCT is designed to interface with TTL (receive TTL signal levels as inputs). The outputs are the same (CMOS outputs are rail to rail, which you can feed into TTL just fine).

Actual TTL logic, like the 7400 series and the variations (LS is one of the more popular variations), uses NPN transistors as inputs and to pull output signals low. It uses resistors to pull the signals high. The result is a lot of current consumption and asymmetrical output signals… maybe a good question to ask ChatGPT is “why does TTL use so much current?” CMOS, by comparison, uses a tiny amount of current except when it is switching.

I would probably choose AHC first as a logic family these days. It’s a slightly better version of HC, but it’s not so fast that it will cause problems.

Just peeking at one of the recommendations in the chat, if you search for 74HCT125 or 74AHC125 on Mouser, you’ll see that the AHC has more options available and more parts in stock. That’s a sign that it’s probably a more popular logic family than HCT, which is something I consider when buying (more popular = better availability).


Thanks so much for the additional context. You've given me more to dig into.

What I would like to know from you is:

1. On the whole, is the information you see it presenting more or less coherent and useful? Is it better to have this information than not have it at all?

2. Where does this land in terms of your expectations? Did anything surprise you?

It's clear from your reply that you know what you're talking about, while I'm still clawing my way up from nothing... so it makes sense that you have fewer things that you need to ask about.

I've bootstrapped my entire EE skillset over the past 2-3 years, largely with the help of LLMs to interrogate. It's helped me design and build my first product. I'm confident that without these tools, it's not a question of how long it would have taken so much as the truth: it would have died on the vine.

Follow-up: https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...

I asked it about the AHC family equivalent and it recommended against using it, suggesting either AHCT or sticking with HCT. For what it's worth, the reference board that I'm tracing uses an HCT, so the LLM isn't wrong.

Note that at the time I'm writing this, I have an extremely fuzzy understanding of the difference between these three... but I'm working through it.


I’m mostly just curious about how people use LLMs to learn. I don’t know what your goals are, and even if your goals were the same as mine, I don’t know how LLMs stack up against the way I learned (mostly from books). At least, not long-term. I’m not that good at electronics, I’m just a hobbyist that went through Forrest Mims mini-notebooks and later Horowitz and Hill.

What I like about information from humans is that humans are always trying to figure out how to say things that are relevant and informative. By “relevant”, I mean that we try to avoid saying things that don’t help you. By “informative”, I mean that we try to include information that you want to know, even if you didn’t specifically ask for it.

Picking on the chat for a moment—when you started out with the question, my first thought was, “This person is specifically asking about HC versus HCT, but maybe they want a broader overview of logic families, and maybe they want to understand which logic family to pick for their hobby project.” That’s an example where I think ChatGPT could have identified something that you wanted to know, but didn’t. (It wasn’t as informative as it could have been.)

Then there’s some times that ChatGPT gave you information of dubious relevance.

> Important: On the HCT125, the enable is active-LOW.

I don’t think that’s contextually important. It’s like saying, “Important: On the Honda Civic, the gas tank is on the left.” That’s contextually important when you’re at the gas station, but not when you’re buying a car.

I’m not sure why the LLM is recommending the TTL-compatible chips. IMO, the right thing to do here is probably to run everything at 3.3V, unless you have something that specifically needs 5V. When everything is at 3.3V, you don’t have to think about level shifting and you can just pick a very boring logic family like AHC. But I don’t know what you’re building. Likewise, I would lean towards using normal CMOS logic levels, unless I had a specific reason to choose TTL-compatible. The regular CMOS versions have better noise margin, because the threshold is in the optimum place—right in the middle.
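To put rough numbers on the noise-margin point above, here's a toy calculation using typical 5 V datasheet thresholds. The specific voltages are illustrative ballpark figures (0.3·Vcc / 0.7·Vcc inputs for HC, classic TTL levels for HCT, near-rail CMOS outputs), not values from any one part's datasheet:

```javascript
// Noise margin = how much noise a signal can pick up before an input
// misreads it: (VIL - VOL) on the low side, (VOH - VIH) on the high side.
function noiseMargins(vil, vih, vol, voh) {
  return { low: vil - vol, high: voh - vih };
}

// HC inputs (thresholds at 1.5 V / 3.5 V) driven by CMOS outputs (~0.1 V / ~4.9 V):
const hc = noiseMargins(1.5, 3.5, 0.1, 4.9);  // ~1.4 V of margin both ways
// HCT inputs (TTL levels, 0.8 V / 2.0 V) driven by the same outputs:
const hct = noiseMargins(0.8, 2.0, 0.1, 4.9); // low-side margin only ~0.7 V
```

The symmetric ~1.4 V margins are what "threshold in the middle" buys you; the TTL-compatible thresholds sit low, so the low-side margin shrinks.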


I can actually clear a lot of that up. ChatGPT has accumulated a significant amount of ambient knowledge about what I'm working on and how I typically progress through asking questions, so the path isn't as blue sky as it appears.

For example, I'm working with a specialty SPCO switch IC that runs at 5V. There's never been and likely never will be a 3.3V version of the AS16M1. Being able to drive the switch (which functions like a shift register) from my ESP32 is top-of-mind.

The HCT125 being active low is directly responding to my question about why to choose it vs the 126 version; since the board I'm studying (which again, it has seen) uses 125s, it's reasonable to wonder why they'd choose one over the other.

Overall, the tone of chats on EE topics tends to be task-focused with permission to go on interesting side quests. I'm trying to get stuff done with room for relevant exploration along the way.

Does that change anything for you?


Sure, that makes sense. Use the 74AHCT125/6 or 74HCT125/6 to drive a 5V chip from a µc with 3.3V IO.

I think you're in the minority of people that are using LLMs for one of the best uses - for augmenting your own understanding and intelligence. Of course you have to triangulate and triple check what they say, but that's a good habit to get into anyway. Many of my teachers would repeat tribal myths all the same.

Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.

If someone is in an environment where they have to do XYZ or die, their choice to do XYZ might not reflect their personality, but the environment where they have to do XYZ or die.

But if you were watching them, was there really no freedom from consequences? At least there was the risk of you thinking less of them.

I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.


> But if you were watching them, was there really no freedom from consequences?

Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.

You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.

Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.


Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.

If Claude chooses GitHub Actions that often, well, that is DAMNING. I wasn't prepared for this but jeez, GitHub Actions are kind of a tarpit of just awful, shitty code that people copy from other repos, which then pulls and runs the latest copy of some code in some random repository you've never heard of. Ugh.

So… this has been happening for a long time now. The baseline set of tools is a lot better than it used to be. Back in 2010, jQuery was the divine ruler of JSlandia. Nowadays, you would probably just throw your jQuery in the woodchipper and replace it with raw, unfinished, quartersawn JS straight from the mill.

I also used to have these massive sets of packages pieced together with RequireJS or Rollup or WebPack or whatever. Now it’s unnecessary.

(I wouldn’t dare swap out a JWT implementation with something Claude wrote, though.)


Sorry, by JWT I meant the middleware that integrates the crypto into my web server (pretty sure even Claude doesn't attempt to do hand-rolled crypto, thankfully).

That Express middleware library has a ton of config options that were quite the headache to understand, and I realized that it's basically a couple-hundred-line skeleton that I spent more time customizing than it would have taken to write from scratch.

As for old JS vs new JS - I have worked more in the enterprise world before, working with stuff like ASP.NET in that era.

Let me tell you a story: way back when, I needed to change how a particular bit of configuration was read at startup in an ASP.NET server. I was astonished to find that the config logic (which was essentially just binding data from env vars and JSON to objects) was thousands upon thousands of lines of code, with a deep inheritance chain and probably a UML diagram that could've covered a football field.

I am super glad that that kind of software engineering lost out to simple and sensible solutions in the JS ecosystem. I am less glad that that simplicity is obscured and the first instinct of many JS devs is to reach for a package instead of understanding how the underlying system works, and how to extend it.

Which tbf is not their fault - even if simplicity exists, people still assume (I certainly did) that that JWT middleware library was a substantial piece of engineering, when it wasn't.


I smell wood reading this

Third option, which is that there was a partially automated system with a person in the loop.

I remember this in the news, but I had to look stuff up on Wikipedia to refresh my memory:

https://en.wikipedia.org/wiki/Terra_(blockchain)

> The Anchor Protocol was a lending and borrowing protocol built on the Terra chain. Investors who deposited UST in the Anchor Protocol were receiving a 19.45% yield paid out from Terra's reserves.

What the fuck?


It was a system built on sleight of hand, with a cover story just complex enough that it worked for a while.

How do you create a stablecoin? There are two ways in general. One is to have it backed 1:1 by a bank account somewhere that contains the actual currency it represents. In theory you then allow people to freely exchange back and forth between tokens and dollars. Tether kinda/sorta works this way in theory.

The other way is to play games with algorithms and try to use the market against itself to create stability. Terra (UST) attempted to do this by running a complex scheme that leveraged a floating backing token, Luna, and a smart contract which allowed you to exchange 1 UST for $1 worth of newly created Luna. If UST starts to lose its peg and becomes worth less than a dollar, people buy it to exchange for $1 worth of Luna, sell the Luna for a profit, so arbitrage sorts the price out. If it becomes worth more than a dollar, you buy Luna, burn it to convert to new UST, then sell that for a profit, adding sell pressure and diluting the supply.
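The mint/burn loop above can be sketched as a toy decision function. This is heavily simplified; the real system involved oracles, fees, and on-chain market depth, and the function name is just for illustration:

```javascript
// Given the market price of UST in dollars, returns which direction
// arbitrageurs are incentivized to push, per the mint/burn contract.
function arbitrageDirection(ustPrice) {
  if (ustPrice < 1.0) {
    // Buy cheap UST, redeem it for $1 of newly minted Luna, sell the Luna.
    // UST supply shrinks, pushing its price back up toward the peg.
    return "burn UST, mint Luna";
  }
  if (ustPrice > 1.0) {
    // Buy Luna, burn it for new UST, sell that UST above $1.
    // UST supply grows, pushing its price back down toward the peg.
    return "burn Luna, mint UST";
  }
  return "at peg";
}
```

The catch is that the sub-peg branch only restores the peg if someone actually wants the freshly minted Luna, which is exactly the assumption that failed.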

Even with the best will in the world systems like this could best be described as meta-stable, i.e. it'll smooth out minor perturbations but there are limits.

One major problem, though: how do you get Luna to be worth anything in the first place? Well, you offer inducements like a ridiculous interest rate, high enough that anyone outside the cryptocurrency bubble would immediately see a red flag, and which then has to be subsidised by ... creating more tokens.

Eventually the limit was discovered, Luna dumped massively, and the whole illusion collapsed.


A Ponzi crypto scheme? It's the only kind.
