dTal's comments

Interesting comment with worthwhile content, but the writing style strongly smells of ChatGPT, and the phrase "For someone in your position" is incongruous (who is being addressed?). Did you use it, and if so, would you mind sharing the prompt?

They're a founder of a startup doing this kind of thing; realistically they probably copied a pre-prepared marketing blurb or something they sent to someone else.

“Terrestrial develops a robotic approach to earthen construction”. The person you replied to claims to be a founder of this company, according to LinkedIn.

Other “tells” in the comment are the subscript “2” and the full spelling of the chemical. Also, three emdashes.


yeah, so the turn in the EU towards renewable energy is driving the business of earthen construction forward. our core (validated) product is printing earthen acoustic barriers at ~4-5 m3/hr. panels from loam are a great alternative to gypsum; due to the hygrothermal characteristics of earth, the moisture content is stabilised (constant within a ~50-55% band), which is a massive advantage compared to traditional materials. and fully circular. I'm a developer of pythonocc and tesseract-nanobind, and take pleasure in augmenting my thinking with a dash of ai.

I use subscript 2s and emdashes. Your "tells" seem to be based on the assumption that humans will not bother to learn key combinations.

There are four paragraphs and three of them have emdashes. You may use emdashes, but you use them orders of magnitude less frequently than current AI models.

Money is power, and nothing but.

But power is not only money.

LLM output has its quirks, but human output can be much quirkier. To me, the most obvious tell of AI is a lack of quirks.

I would expect there not to be a meaningful number of "unfamiliar words and idioms" to a professional translator.

I would expect that many professional translators have native fluency in only one language.

Disc = round part visible

Disk = round part hidden or no round part

Have I got it!?


I think their primary difference is disc = optical, disk = magnetic. That’s what they mention first.

All of that “in the UK”.

Looking at the store, they’re using “SSD Storage” for SSD.


The British spelling was used by Philips when they launched the Compact Disc with Sony.

Disk was used by American companies inventing hard disks, floppy disks etc.

British software often used "disc" for both, e.g. RISC OS on Acorn/ARM/Raspberry Pi [1].

[1] https://arcwiki.org.uk/index.php/RISC_OS_3 (see screenshot)


Apple uses “disk” when referring to SSD storage. They still use “disc” when referring to a CD or DVD.

Source: the language used in macOS Tahoe.


SSD could stand for "SSD Storage Device".

Bring back recursive acronyms!


SSD, of course, stands for Solid State Dis[c,k]...

Solid State Drive, usually, but when it comes to language anything goes.

A drive is a motor or other similar device, one that is driven or worked.

But there are no moving parts in an SSD.


Hence solid state.

Disck.

Your ultra-reductionism does not constitute understanding. "Math happens and that somehow leads to a conversational AI" is true, but it is not useful. You cannot use it to answer questions like "how should I prompt the model to achieve <x>". There are many layers of abstraction within the network - important, predictive abstractions - which you have no concept of. It is as useful as asking a particle physicist why your girlfriend left you, because she is made of atoms.

Incidentally, your description of LLMs also describes all software, ever. It's just math, man! That doesn't make you an expert kernel hacker.


It sounds like you're looking for the field of psychology. And like the field of psychology, any predictive abstraction around systems this complicated will be tenuous, statistical, and full of bad science.

You may never get a scientific answer to "how should I prompt the model to achieve <x>", just like you may never get a capital-S scientific answer to "how should I convince people to do X". What would it even mean to "understand people" like this?

You demand too much.


>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.

No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.

To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
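For the curious, here's a minimal sketch of that trick using Hugging Face transformers and GPT-2 (a base model with no chat finetuning; the transcript framing, not the model, is doing the work):

  from transformers import pipeline

  # A plain base model: next-word prediction only, no chat tuning.
  generator = pipeline("text-generation", model="gpt2")

  # Declare the transcript, then let the model imagine the chatbot's turn.
  prompt = (
      "The following is a transcript of a dialogue between a human "
      "and a helpful AI chatbot.\n\n"
      "Human: What is the capital of France?\n"
      "AI:"
  )

  result = generator(prompt, max_new_tokens=40, do_sample=True)
  print(result[0]["generated_text"])
  # A real chat loop would cut generation at the next "Human:" line
  # and hand control back to the user.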


>No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point.....

To be fair, only if you pose the question on its own with no preceding context. If you want the raw LLM to answer your question(s) reliably, you can prepend other question-answer pairs to the context and it works fine. A raw LLM is already capable of being a chatbot, or anything else, with the right preceding context.
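A hedged sketch of that, with the same GPT-2 base model as above (the Q/A pairs are made up for illustration):

  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")  # base model, no chat tuning

  # Prepend a few question-answer pairs to establish the pattern.
  few_shot = (
      "Q: What is the boiling point of water at sea level?\n"
      "A: 100 degrees Celsius.\n\n"
      "Q: Who wrote Don Quixote?\n"
      "A: Miguel de Cervantes.\n\n"
      "Q: What is the tallest mountain on Earth?\n"
      "A:"
  )
  print(generator(few_shot, max_new_tokens=10)[0]["generated_text"])

With the pattern established, an answer becomes the statistically likely continuation rather than another question or a flame war.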


Right, but that was my point - statistically, answers do not follow questions without some establishing context, and as such, while LLMs are "simply" next word predictors, the chatbots aren't - they are Hofstadterian strange loops that we will into being. The simpler you think language models are, the more that should seem "magic".

They're not simple though. You can understand, in a reductionist sense, the basic principles of how transformers perform function approximation; but that does not grant an intuitive sense of the nature of the specific function they have been trained to approximate, or how they have achieved this approximation. We have little insight into what abstract concepts each of the many billions of parameters map on to. Progress on introspecting these networks has been a lot slower than trial-and-error improvements. So there is a very real sense in which we have no idea how LLMs work, and they are literally "magic black boxes".

No matter how you slice it - if "magic" is a word which can ever be applied to software, LLM chatbots are sure as shit magic.


Don't forget an entire new category of computing, AI, which is teetering on the edge of requiring processors from one manufacturer, which in turn requires gigabytes of closed-source runtime. Today, you can do functionally more with a computer with an nVidia chip, driven by their binary blobs, than with any other hardware - even though the application software is usually Free. It's a dangerous situation. We are so used to general purpose compute substrate being "free software friendly", but this amounts to a new type of CPU that categorically requires a closed-source OS to be useful.

Is this supposed to be a difficult choice?

I have the opposite reaction to your historical energy figures: energy consumption is clearly not as important to technological progress as we imagine. If there's only a 4x difference between the Founding Fathers and B-29s carrying nukes, why should there be orders of magnitude between today and [insert scifi]?

No, the real question is, where the hell is this exponential increase coming from? I think anyone would agree that, along most obvious metrics, the difference between 1800 and 1945 is much more pronounced than between 1945 and 2020. Yet the first was a 4x increase, and the second, over 7x. And in about half the time, too.
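Back-of-envelope, treating the spans as roughly 1800-1945 and 1945-2020 (the exact endpoints are my assumption, not the article's):

  # Annualized growth rate implied by "factor x over n years".
  def cagr(factor, years):
      return factor ** (1 / years) - 1

  print(f"{cagr(4, 145):.2%}/yr")  # ~4x over 1800-1945: about 1%/yr
  print(f"{cagr(7, 75):.2%}/yr")   # ~7x over 1945-2020: about 2.6%/yr

So the annualized rate roughly tripled, which is the acceleration that needs explaining.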

I'd like to see it broken down by country. I'll bet a lot of the increase actually comes from very poor countries turning into rich ones. In the West, our at-home per-capita energy use has not changed much since 1945, and may even have declined for some demographics (1945 houses were poorly insulated). But China lifted some hundreds of millions of peasant farmers into a middle-class existence. That's got to be a bigger factor than the fact that I own a laptop and my grandpa didn't.

