Hacker News | codebje's comments

The TI-84+ uses a TI REF 84PLUSB (or variant) ASIC that has a Z80-compatible core in it, not a Zilog Z80, and, as you say, definitely not a DIP40 part.

See the ASIC here, in what looks to me like a QFP-144 package: https://guide-images.cdn.ifixit.com/igi/e25cVO2avPxiMoXl.hug...

The CE also uses an ASIC, but with an eZ80 core instead.


Thank you! I had a job coding Z80 assembly "back in the day" and grew to love its instruction set, so I'm not surprised there's enough legacy value to keep stuffing wee Z80ish cores into modern devices.

I also found this gem just now: http://datamath.org/Album_Graph.htm


Just for hobbyists. It's very much over-engineered as a simple Z80 CPU drop-in replacement.

That's not to say I couldn't imagine that someone, somewhere, wakes up to an alert one day that some control board has failed, and it's _just_ the CPU, and the spare parts bin for out-of-production components got water in it and is ruined, and the company is losing millions for every hour the system is down. I just don't think that'll be a common story. With full faith in humanity I like to imagine instead that the people responsible for such systems have planned for full control board replacements to be available for use comfortably before unavailability of the Z80 risks a significant outage due to component failure.


Also it's not solving the problem at hand, which is that we need a separate "user" and "data" context.

Well no, nothing like that, because customers and bosses are clearly different forms of interaction.

Just like that, in that the separation is internally enforced, by people's interpretation and understanding, rather than externally enforced in ways that make it impossible for you to, e.g., believe the e-mail from an unknown address that claims to be from your boss, or be talked into bypassing rules for a customer who is very convincing.

Being fooled into thinking data is instruction isn't the same as being unable to distinguish them in the first place, and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human.

> and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human.

This is literally what "prompt injection" is. The sooner people understand this, the sooner they'll stop wasting time trying to fix a "bug" that's actually the flip side of the very reason they're using LLMs in the first place.


Prompt injection is just setting rules in the same place and way other rules are set. The LLM doesn't know the rules being given are wrong, because they come through the same channel. One set of rules exhorts the LLM to ignore the other set - and vice versa. It's more akin to having two bosses than having customers and a boss.

This is not because LLMs make the same mistakes humans do, which (AFAICT anyway) was the gist of the argument to which I replied. LLMs are not humans. They are not sentient. They are not out-smarted by prompt injection attacks, or tricked, or intimidated, or bribed. One shouldn't excuse this vulnerability by claiming humans make the same mistakes.


This makes no sense to me. Being fooled into thinking data is instruction is exactly evidence of an inability to reliably distinguish them.

And being coerced or convinced to bypass rules is exactly what prompt injection is, and very much not uniquely human any more.


The email from your boss and the email from a sender masquerading as your boss are both coming through the same channel in the same format with the same presentation, which is why the attack works. Unless you were both face-blind and bad at recognizing voices, the same attack wouldn't work in person: you'd know the attacker wasn't your boss. Many defense mechanisms used in corporate email environments are built around making sure the email from your boss looks meaningfully different, in order to establish that data vs instruction separation. (There are social engineering attacks that would work in person, but I don't think it's right to equate those to LLM attacks.)

Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything.


> The email from your boss and the email from a sender masquerading as your boss are both coming through the same channel in the same format with the same presentation, which is why the attack works.

Yes, that is exactly the point.

> Unless you were both faceblind and bad at recognizing voices, the same attack wouldn't work in-person, you'd know the attacker wasn't your boss.

Irrelevant, as other attacks work then. It is never a given that your boss's instructions are consistent with the terms of your employment, for example.

> Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything.

It is very much "convincing", yes. The ability to convince an LLM is what creates the effective lack of separation. Without that, just using "magic" values and a system prompt telling it to ignore everything inside would create separation. But because text anywhere in context can convince the LLM to disregard previous rules, there is no separation.


The second leads to the first, in case you still don't realize it.

If they were 'clearly different' we would not have the concept of the CEO fraud attack:

https://www.barclayscorporate.com/insights/fraud-protection/...

That's an attack because trusted and untrusted input goes through the same human brain input pathways, which can't always tell them apart.


Your parent made no claim about all swans being white. So finding a black swan has no effect on their argument.

My parent made a claim that humans have separate pathways for data and instructions and cannot mix them up like LLMs do. Showing that we don't has every effect on refuting their argument.

>>> The principal security problem of LLMs is that there is no architectural boundary between data and control paths.

>> Exactly like human input to output.

> no nothing like that

but actually yes, exactly like that.


These are different "agents" in LLM terms; they have separate contexts and separate training.

There can be outliers, maybe not as frequent :)

A series of tarballs is version control.

Git gives you the series of past snapshots if that's all you want it for, but in infrastructure you don't need to re-invent.
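For what it's worth, the tarball scheme really is only a few lines of shell. This is a sketch; the `project/` and `backups/` paths are made-up examples:

```shell
# A "series of tarballs" version-control scheme, as a shell function.
# Each call snapshots project/ into a timestamped compressed archive.
snapshot() {
    mkdir -p backups
    tar -czf "backups/project-$(date +%Y%m%d-%H%M%S).tar.gz" project/
}

# Restoring is just unpacking the snapshot you want, e.g.:
#   tar -xzf backups/project-20240101-120000.tar.gz
```

It gets you the series of past snapshots; what it doesn't get you is diffs, merges, or history browsing, which is the argument for just using Git.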


On the one hand, hundreds or perhaps thousands of studies might be wrong. On the other hand, this one might be wrong. Who's to say?

Not even that! This study doesn't even say contamination is causing overestimation. It says that it's possible.

But as mentioned elsewhere in the thread, everyone knows that it's possible and takes measures to mitigate it.

A paper that said those mitigations were insufficient or empirically found not to work would be interesting. A paper saying "you should mitigate this" is... not very interesting.


> Not even that! This study doesn't even say contamination is causing overestimation. It says that it's possible.

From the article:

> They found that on average, the gloves imparted about 2,000 false positives per millimeter squared area.

I dunno, that seems like a lot of false positives. Doesn’t that strongly imply that overestimation would be a pretty likely outcome here? Sounds like a completely sterile 1mm^2 area would raise a ton of false positives because of just the gloves.


The way you mitigate this is by using negative samples: blank swabs/tubes/whatever that don't have the substance you're testing for, but that are handled the same way.

Then the tested result is Actual Sample Result - Negative Sample Result.

So you'd expect a microplastic sample to read 2,000 plus N per mm^2, where N is the actual result of your test.
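As a sketch of that blank subtraction (the counts are made-up per-mm^2 figures, purely illustrative):

```python
# Hypothetical illustration of negative-control (blank) subtraction.
# All counts are particles per mm^2; the numbers are invented.

def corrected_count(sample_count: float, blank_count: float) -> float:
    """Subtract the contamination measured on a blank control,
    clamping at zero since a negative particle count is meaningless."""
    return max(sample_count - blank_count, 0.0)

# A raw reading of 2,750/mm^2 against a blank reading 2,000/mm^2
# leaves an estimated 750/mm^2 actually present in the sample.
print(corrected_count(2750, 2000))  # -> 750.0
```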


That has happened many times in scientific research. The aforementioned fad in DNA sequencing was one such case, where tons of papers written before proper methods were developed are entirely useless, essentially just garbage data. Another case is fMRI studies before the dead salmon experiment.

If the bot could also take care of any unpaid labour the interview process is asking for, that'd be swell. The company's bot can pull a ticket from the queue, the candidate's bot could process it, and the HR bot could approve or deny the hire based on hidden biases in the training data and/or prompt injections by the candidate.


DDR3 traces need to be length matched, because at 800 MT/s (the slowest "standard" rate, though I think you can drop to 667 MT/s safely) the value on the pins is changing every 1.25ns, and having traces of different lengths means you probably won't see the right values on all the pins at the same moment. Length matching produces the squiggles.
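The arithmetic behind that can be sketched quickly. The numbers below are ballpark assumptions, not spec values: ~6.7 ps/mm is a typical propagation delay on outer FR4 layers, and the 10% skew budget is an illustrative rule of thumb rather than a JEDEC figure:

```python
# Back-of-the-envelope check of why DDR3 traces get length matched.
# Assumed ballpark figures: 800 MT/s data rate, ~6.7 ps/mm trace
# propagation delay on FR4, and a 10% skew budget per bit time.

data_rate = 800e6                      # transfers per second
bit_time_ps = 1e12 / data_rate         # 1250 ps per bit at 800 MT/s
prop_delay_ps_per_mm = 6.7             # rough FR4 microstrip figure

skew_budget_ps = 0.10 * bit_time_ps    # illustrative 10% rule of thumb
max_mismatch_mm = skew_budget_ps / prop_delay_ps_per_mm

print(f"{bit_time_ps:.0f} ps bit time, ~{max_mismatch_mm:.1f} mm allowed mismatch")
```

Even under these generous assumptions the traces in a byte lane have to agree to within a couple of centimetres, which is why the serpentine squiggles appear.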

The diagonal orientation of the DDR3 chip and corresponding diagonal traces is, I suspect, a choice made by the author to ease the layout process: it's more likely that it was hand laid out to get traces of somewhat similar length with a minimum of fuss, followed by a length matching tool. A non-standard orientation can cause issues with pick-and-place machinery, which usually will handle 90 degrees fine, and _often_ 45 degrees fine, but (AFAIK) _rarely_ anything else; that's not a problem for the author, though, because he's assembling it himself. A diagonal IC also usually results in wasted space, which you can see in the empty areas of the resulting board. A 90 degree orientation may have allowed for a few more decoupling capacitor banks, but since his board works, who am I to sit here and judge?


Yes, I had to place the DDR3 chip diagonally to simplify routing. Otherwise the length difference on address lanes was so big that I couldn't compensate it with serpentines.

I didn't use an autorouter: I haven't found any reasonably working KiCad plugin for it, and didn't want to buy any commercial software for a hobby project.

(I am the author)


Pick and place machines can place components very precisely at any angle - they really don’t care!


> A non-standard orientation can cause issues with pick-and-place machinery, which usually will handle 90 degrees fine, and _often_ 45 degrees fine

This sounds like nonsense. Pick and place machines don't pick up components perfectly deterministically. There is always a tilt and an offset when you pick the part up, which is why a computer vision system has to account for part orientation and the center of the part. The machine must compensate for the error by moving and rotating the part accordingly.


It's illegal to do illegal stuff, but it's not illegal to do off-label usage stuff. If I want to take your hydrogen peroxide you sell as a surface disinfectant and mix it with vinegar and salt to etch my PCBs at home, that's my prerogative.


Well, who's gonna tell on you? :) I don't have a bottle of H2O2 handy so I don't know if it normally has that disclaimer.


You don't need to move stores, though, do you? Want to play a Steam exclusive? Fine, launch Steam. Want to play an Epic exclusive? Fine, launch Epic.

What you do need is to avoid tying your game socialisation to a _store_. Some day, Steam will be enshittified too.


But I don't want all these app stores!!

The ideal number of app stores I want installed on my computer is ZERO. I don't want to have to load a damn "store" just to obtain and run your game. I am willing to angrily live with ONE store on my computer, Steam, but no way in hell am I going to tolerate having to have an Epic Store and a Microsoft Store and an Activision Store and a goddamn Rockstar Store and an Ubi Store and a fucking Adobe store for Photoshop. I don't want to have to install store after store for each damn app developer on my computer, yet that's the way the industry seems to be headed.


I don't know why "zero" is ideal. That means going back to the old days where every single company would need their own launcher.

Having a separate company focus on distribution sounds more ideal.

Epic Games had an opportunity here to erode the app store margins through standardization, instead, they've become a copycat of what they resented with a slightly smaller cut.


Why would games need their own launcher?

Just install the damn game, ask if you want icons on the desktop as well as in the start menu.

OS handles it all for you.

Perhaps some multiplayer functionality and such makes sense to share cross-game, but I miss the bad old days of every game having a bunch of privately maintained servers and its own server browser list etc. You could eventually find a few servers that fit your playstyle and make online gamer friends that way.

The only benefit steam brings to the table as far as I can tell is making it easy to reinstall your library on a fresh PC.


Yea, that's another way games are terrible today. I don't want a launcher for my game. My OS is my launcher. I don't want a launcher, I don't want a store, I don't want a "helper," I don't want a tray icon, I don't want an updater. Why can't game companies just ship their game and that's it?


Try shipping a game and you’ll find out real quick.

A very bad copycat


I mostly play games on a computer in my living room. It boots into Steam Big Picture, which I use to launch a game (or sometimes buy new games) using an xbox controller.


And yet Epic is shitty today.

