
Inventing words (neologisms) with LLMs is a fun pastime, as is giving one a description and having it define the concept in various languages.

Any qualities you ascribe to an LLM are part of its RLHF; ask it to get irritated or lazy and it will simulate those qualities. LLMs are high-dimensional text simulators. They can and do simulate anything.


Employees can't get access to encrypted messages.

But they can look the other way on flaws in their Electron client.


Or any client.

This should be illegal.

1 day after they were emailed.

Also, "Coinbase had detected the breach independently in previous months", aren't they required to disclose this? In the EU they are: Every EU institution must do this within 72 hours of becoming aware of the breach, where feasible


Fun article on David Ackley https://news.unm.edu/news/24-nobel-prize-in-physics-cited-gr...

Do check out his T2 Tile Project.


The key takeaway is that lots of people are involved in making these breakthroughs.

The value of grad students is often overlooked; they contribute so much and then later advance the research even further.

Why does America look on research as a waste, when it has moved everything so far forward?


Why do you say that America looks on research as a waste? We spend a higher percentage of GDP on R&D than just about any other country in the world:

https://en.wikipedia.org/wiki/List_of_sovereign_states_by_re...


It's more accurate to say that businesspeople consider research a waste in our quarter-by-quarter investment climate, since it generally doesn't lead to immediate gains.

And our current leadership considers research a threat, since science rarely supports conspiracy theorists or historical revisionism.


More charitably, America's current government has an unusually large concentration of businesspeople. Interestingly, they were elected, as a vote for change, by a population of non-businesspeople who were tired of the economic marginalization they suffered when their government consisted largely of non-businesspeople, not once but twice. It will be interesting to see how this plays out.

It is AI; you can talk and have computers respond.

I would also recommend R. G. Loeliger's "Threaded Interpretive Languages: Their Design and Implementation". Between these two books, the whole beauty of Forth and its implementation should just click.

Forth isn't one of those languages that you _use_. You extend the language from the inside, so you need to know how your Forth is implemented. I'd say it is the only language whose users could all recreate the language themselves.
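To make the "recreate the language" point concrete, here is a minimal sketch of the dictionary-plus-inner-interpreter core that a threaded interpretive language is built on. It is Python purely for illustration (a real indirect-threaded Forth stores machine addresses, not names), and the names are mine, not from either book:

    # Data stack, plus a dictionary mapping word names to either a
    # primitive (host-language callable) or a "thread" of existing words.
    stack = []
    dictionary = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }

    def interpret(word):
        entry = dictionary[word]
        if callable(entry):        # primitive: run host code
            entry()
        else:                      # compiled word: walk its thread
            for w in entry:
                interpret(w)

    # Extending the language from the inside, i.e. ": double dup + ;"
    dictionary["double"] = ["dup", "+"]

    stack.append(21)
    interpret("double")
    interpret(".")                 # prints 42

Everything else in a Forth, compiler and outer interpreter included, is bootstrapped from a core about this small, which is why knowing the implementation and using the language end up being the same activity.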


> MicroPython's inline assembler now supports 32-bit RISC-V assembly code via the newly implemented @micropython.asm_rv32 decorator. This allows writing small snippets of RISC-V machine code that can be called directly from Python code. It is enabled on the rp2 port when the RP2350 is running in RISC-V mode.

Exciting!
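For a flavor of the syntax, here is a sketch following the pattern of MicroPython's documented Thumb inline assembler. The a0..a3 argument registers and the exact mnemonic spelling are my assumptions; check the inline-assembler docs for the rp2 port before relying on this:

    import micropython

    @micropython.asm_rv32
    def add_one(a0):
        # Assumption: arguments arrive in a0..a3 (mirroring r0..r3 in
        # the Thumb assembler) and the return value is read from a0.
        addi(a0, a0, 1)

    print(add_one(41))  # expected: 42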


You are just doubling down on protecting your argument.

I operate LLMs in many conversational modes where they do ask clarifying questions, probing questions, and baseline-determining questions.

It takes at most one sentence in the prompt to get them to act this way.


> It takes at most one sentence in the prompt to get them to act this way.

What is this one sentence you are using?

I am struggling to elicit clarification behavior from LLMs.


What is your domain, and what assumptions are they making that they should be asking you about? Have you tried multiple models?

"Any questions before you start coding?"

Could you share your prompt to get it to ask clarifying questions? I'm wondering if it would work in custom instructions.

It is domain-dependent; you really need to play with it. Tell it you are doing pair thinking and either get it to ask questions about things it doesn't understand, or get it to ask you questions to get you to think better. Project the AI into a vantage point in the latent space and then get it to behave in the way that you want it to.

You can ask it to use the Socratic method, but then it is probing you, not its own understanding. Now have it use the Socratic method on itself. You can tell it to have multiple simultaneous minds.

Play with DeepSeek in thinking and non-thinking mode; give it nebulous prompts and see if you can get it to ask for clarifications.
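As one concrete starting point (my own phrasing, purely illustrative; adapt it to your domain), a single sentence along these lines is usually enough:

    We are doing pair thinking: before answering, ask me up to three
    clarifying questions about anything you are unsure of, and state
    any assumptions you would otherwise have made silently.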

