The alphabet is such an underrated invention. It's probably higher in significance than the invention of the wheel; it's the original "bicycle of the mind". For example, Korea's pivot from Chinese characters to its own alphabet, Hangul, is very well documented, including the positive effects it had on Korean literacy and civilization after the conversion. Fun fact: anyone can learn the Hangul alphabet in a single day if they want to, but the same cannot be said of Chinese characters. If your mother tongue is Korean (e.g. a Korean American who has only just started learning to read), it takes only a day to go from illiterate to literate.

The idea that the script is the main driver of literacy is a pet peeve of mine. It's not the script, it's the schooling system. The high rates of literacy in modern states are just a result of the school system - Japan has a high literacy rate, for example, and its writing system is either the worst in the world or close to it.

That said, the characters are a whole boatload of unnecessary extra effort, and as a student of the two languages, the artificial illiteracy created by kanji, where I often just can't read words I've known for years, is simply maddening. Not having to wrestle with characters does free up a lot of time for both native and foreign students alike.


>their writing system is either the worst in the world or close to it

Yes, it's probably the worst, since even Microsoft still struggles to provide a proper search solution for Japanese names in Windows because of the multitude of writing systems.

By sheer will you can of course make anything hard feasible, but that does not mean it's efficient and effective. I consider Japan a unique country with extraordinary people who can collectively overcome adversity, and that includes a non-intuitive and difficult writing system.


  If your mother tongue is Korean (e.g. a Korean American who has only just started learning to read), it takes only a day to go from illiterate to literate.
Heritage speakers (of any language, not just Korean) often have limited vocabulary and limited exposure to complex grammar. Being able to sound out words wouldn't be enough to allow a heritage speaker to fluently read a newspaper.

How many Korean-Americans know the Korean words for things like 'legislature', 'inflation', or 'geopolitical tensions'?


>In April this year, China installed more solar power than Australia has in all its history. In one month.

>This website is for humans, and LLMs are not welcome here.

Ultimately LLMs are for humans, unless you've watched too many Terminator movies on repeat and taken them to heart.

Joking aside, there is a next-gen web standards initiative, namely Braid, that aims to make the web more human- and machine-friendly with a synchronous web of state [1],[2].

[1] A Synchronous Web of State:

https://braid.org/meeting-107

[2] Most RESTful APIs aren't really RESTful (564 comments):

https://news.ycombinator.com/item?id=44507076


>LLMs are not "compelled" by the training algorithms to learn symbolic logic.

I think "compell" is such a unique human trait that machine will never replicate to the T.

The article specifically mentions this very issue:

"And of course people can be like that, too - eg much better at the big O notation and complexity analysis in interviews than on the job. But I guarantee you that if you put a gun to their head or offer them a million dollar bonus for getting it right, they will do well enough on the job, too. And with 200 billion thrown at LLM hardware last year, the thing can't complain that it wasn't incentivized to perform."

If it's not already evident: an LLM by itself is, by definition, a limited stochastic AI tool, and its distant cousins are deterministic logic, optimization, and constraint programming [1],[2],[3],[4]. Perhaps one of the two breakthroughs the author is predicting will come from this deterministic domain to assist the LLM, and it will be a hybrid approach rather than purely LLM.

[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:

https://www.youtube.com/live/TknN8fCQvRk

[2] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:

https://youtube.com/watch?v=HB5TrK7A4pI

[3] Google OR-Tools:

https://developers.google.com/optimization

[4] MiniZinc:

https://www.minizinc.org/
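As a rough illustration of what the deterministic side brings to the table, here is a minimal constraint programming sketch using the OR-Tools CP-SAT solver [3]. The toy variables and constraints below are made up purely for illustration:

  # Toy example: maximize an objective under hard constraints, then let the
  # solver prove optimality - no sampling, no hallucination involved.
  from ortools.sat.python import cp_model

  model = cp_model.CpModel()
  x = model.NewIntVar(0, 10, "x")
  y = model.NewIntVar(0, 10, "y")
  model.Add(x + 2 * y <= 14)
  model.Add(x - y >= 1)
  model.Maximize(3 * x + 4 * y)

  solver = cp_model.CpSolver()
  status = solver.Solve(model)
  if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
      print("x =", solver.Value(x), "y =", solver.Value(y))

The point being that the answer comes with a guarantee of optimality, which is exactly the kind of property a purely stochastic LLM cannot offer on its own.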


And yet there are two camps on the matter: experts like Hinton disagree, while others agree.

> The overall disk usage for trixie is 403,854,660 kB (403 GB), and is made up of 1,463,291,186 lines of code.

This makes Debian Trixie about 32 times larger than Windows XP, which has approximately 45 million lines of code and is arguably the best Windows OS ever.

Debian Trixie was released about 24 years after Windows XP.


Sure, but XP came with a minimal amount of bundled software, while that figure covers every package in the Debian repo.

> Windows XP, which has approximately 45 million lines of code and is arguably the best Windows OS ever.

Naah. Early(ish) Win7, or maybe even late Vista.


Kudos to OpenAI for releasing their open models; they're finally moving in the direction suggested by the "Open" prefix in their name.

For those wondering what the real benefits are: the main one is that you can run your LLM locally, which is awesome, without resorting to expensive and inefficient cloud-based superpowers.

Run the model against your very own documents with RAG and it can provide excellent context engineering for your LLM prompts, with reliable citations and far fewer hallucinations, which is especially useful for self-learning purposes [1].

Beyond the Intel/NVIDIA desktop and laptop duopoly, there is the MacBook with 96 GB of (V)RAM thanks to UMA, and the new high-end AMD Strix Halo laptops with a similar setup of 96 GB of (V)RAM out of 128 GB of RAM [2]. The gpt-oss-120b is made for this particular setup.

[1] AI-driven chat assistant for ECE 120 course at UIUC:

https://uiuc.chat/ece120/chat

[2] HP ZBook Ultra G1a Review: Strix Halo Power in a Sleek Workstation:

https://www.bestlaptop.deals/articles/hp-zbook-ultra-g1a-rev...
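For anyone curious what running it locally looks like in practice, here is a minimal sketch. It assumes you already have a local server exposing an OpenAI-compatible endpoint (e.g. Ollama or llama.cpp); the base URL, port, and model tag below are assumptions that depend on your setup:

  # Minimal sketch: query a locally hosted gpt-oss model through an
  # OpenAI-compatible endpoint. The base_url and model name below are
  # assumptions - adjust them to whatever your local server exposes.
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

  response = client.chat.completions.create(
      model="gpt-oss:20b",  # or the 120b variant if you have ~96 GB of (V)RAM
      messages=[
          {"role": "system", "content": "Answer using only the provided context."},
          {"role": "user", "content": "Context: <retrieved document chunks>\n\nQuestion: ..."},
      ],
  )
  print(response.choices[0].message.content)

The RAG part is simply whatever retrieval step you use to fill in the context before the call; the model itself never leaves your machine.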


Fun fact: the Fukushima Daiichi Nuclear Power Plant was built beyond the warning limit of the tsunami stones.

If the people who set up the tsunami stones had still been alive during the incident, they would have had one heck of an "I told you so" moment.


>Onagawa was… 60 kilometers closer than Fukushima Daiichi [to the epicenter] and the difference in seismic intensity at the two plants was negligible. Furthermore, the tsunami was bigger at Onagawa, reaching a height of 14.3 meters, compared with 13.1 meters at Fukushima Daiichi. The difference in outcomes at the two plants reveals the root cause of Fukushima Daiichi’s failures: the utility’s corporate “safety culture.”

>Before beginning construction, Tohoku Electric conducted surveys and simulations aimed at predicting tsunami levels. The initial predictions showed that tsunamis in the region historically had an average height of about 3 meters. Based on that, the company constructed its plant at 14.7 meters above sea level, almost five times that height.

>Tepco, on the other hand, to make it easier to transport equipment and to save construction costs, in 1967 removed 25 meters from the 35-meter natural seawall of the Daiichi plant site and built the reactor buildings at a much lower elevation of 10 meters.

https://thebulletin.org/2014/03/onagawa-the-japanese-nuclear...


This is why so many people are against nuclear. It may be theoretically possible to create safe nuclear plants, but capitalism is badly equipped to create them.

Maybe you mean democratic societies are badly equipped to regulate capitalist economies? There are zero successful capitalist economies that lack powerful governmental regulatory control.

Fukushima 1F was a failure of governmental regulation.

It's really important to understand that, because otherwise you inescapably frame the argument wrongly. Capitalism isn't the problem, regulatory weakness is the problem. No capitalist society can survive lack of effective regulation.

(Fukushima was bad, and an example of regulatory failure, but Japan's overall effective regulatory influence over its corporations — and similarly, its mafia — is the secret sauce that has made it an economic overperformer. China can also do that — because it is a brutal dictatorship. America can't do that — and things aren't looking good. UK retains the power to do it, but it's Keystone Kops. EU can't do it, either, for reasons I can't understand from a distance.)

But creating safe nuclear power plants is fundamentally the same problem as creating safe elevators. In a capitalist society, it's 100% about regulatory power and competence, and nothing else.


Partially agree, but one needs to try really, really hard to intentionally overlook the whole "influence of corporate giants on government" issue before the government regulation argument carries any practical weight.

So if anything this weakens the Fukushima argument: in a country with excellent regulatory tradition and little evidence of regulatory capture, this is less likely to be about bad or lacking government regulations.


It's also initially/primarily a failure of amoral capitalism.

I can't tell if this is a tongue-in-cheek joke or just blatant ignorance. The biggest nuclear power plant disaster, in both scale and deaths, was Chernobyl under communism, and it wasn't their first or last.

Just because authoritarian communism is also not equipped to build nuclear reactors does not mean capitalism is.

"If A, then B" does not imply "B, therefore A". This is called "affirming the consequent".


The reactors were also largely fine, except for the grid connections and the hurricane-resistant backup generators in the basement. It was said ad nauseam at the time that a couple of those could have been placed on the roofs and the reactors could have just survived.

Imagine how much cheaper a few pumps on the roof would've been :)

Similar to mitigating climate change effects 30 years ago. Now it's way too late.


"Beyond" is completely ambiguous in this case. Do you mean above or below?

Well obviously they mean below

Not obvious to me

They famously got inundated with tsunami water. It's pretty reasonable to assume they were below the line of tsunami high water marks.

How is that reasonable to assume? You can have a tsunami that is higher than the previous tsunamis, hence, exceeding previous water marks.

What's more likely: the worst tsunami ever, beyond the previous safety stones? Or a company that shortcuts safety and falls over because it failed to account for predictable circumstances?

Given the latter happens constantly, and the former is once in generations upon generations, I think it's safe to assume the problem is human and the tsunami was within historical ranges.

Especially since other reactors were hit by the same tsunami and were fine.


Unless they mean beyond the reach of the flood waters.

I got very confused too. After reading a few times I interpreted it as a typo.

...beyond the (possible) reach...(of whatever(waves in this case))

Do you have a citation for this? The most Gemini could say is: "While research has not identified a specific tsunami stone located at the Fukushima Daiichi site that was directly violated, the spirit of these ancient warnings was undeniably ignored." (https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...)

I don't know if there are "tsunami stones" in the area, but the nuclear power plant is built at sea level [1], so it would most probably be below them.

The issue is that the height of the seawalls was not sufficient (and perhaps historical warnings, if any, were ignored):

"The subsequent destructive tsunami with waves of up to 14 metres (46 ft) that over-topped the station, which had seawalls" [1]

Edit: Regarding historical warnings:

"The 2011 Tōhoku earthquake occurred in exactly the same area as the 869 earthquake, fulfilling the earlier prediction and causing major flooding in the Sendai area." [2]

[1] https://en.wikipedia.org/wiki/Fukushima_Daiichi_Nuclear_Powe...

[2] https://en.wikipedia.org/wiki/869_J%C5%8Dgan_earthquake


IIRC the issue was the emergency diesel generators being flooded, preventing them from powering the emergency cooling pumps, resulting in the meltdowns from residual heat in the reactor cores and spent fuel pools.

Various construction changes could have prevented this from happening:

- the whole power plant being built higher up or further inland

-> this would likely be quite a bit more expensive due to land availability & cooling water management when not at sea level & next to the sea

- the emergency generators being built higher up or protected from a tsunami by other means (watertight bunker ?)

-> of course this requires the plant cooling systems & the necessary wiring themselves working after surviving a massive earthquake & being flooded

An inland power plant - while quite wasteful in an island country - would be protected from tsunamis & certainly doable. On the other hand, I do wonder how high concrete cooling towers would handle strong earthquakes? A lot of small cooling towers might have to be used, like at the Palo Verde Nuclear Generating Station in Arizona.

Otherwise a bizarre case could still happen, with a meltdown possibly occurring due to your cooling towers falling over & their cooling capacity being lost.


Another option is designing fail-safe reactors. CANDU reactor designs are over 60 years old now and were built fail-safe, so that if outside power to the core is cut off, the system safes itself by dropping the control rods, which are held up by electromagnets, into the core.

A reactor scram isn't necessarily enough -- you still have decay heat to worry about. In the case of Fukushima, the fission chain reaction was stopped but without cooling pumps the decay heat was still too much.

It seems like you should build a water reservoir at a higher elevation than the core and then apply a similar principle where valves regulate the water stream, but if the valves lose power they fail open. The reservoir can be built so that there is always enough water to cool the core.

For light water reactors this basically just amounts to a large pool up a nearby hill or in a water tower.


That is easier said than done - modern reactors are in the 1000 MW+ electrical power range, which means about 3x as much heat needs to be generated to get this much electricity - say 3000 MW.

Even when you correctly shut down the chain reaction in the reactor (which did happen correctly at the affected Fukushima power plant), a significant amount of heat will still be generated in the reactor core for days or even weeks - even if it were just 1% of the 3 GW thermal load, that is still 30 MW. It is most intense immediately after shutdown and then trails off slowly.

The mechanism for this is inherent to fission reactors - you split heavier elements into lighter ones, releasing energy. But some of the new lighter elements are unstable and eventually decay into something else, before finally ending up as a stable element. These decay chains can take quite some time to reach a stable state for much of the core & will keep releasing radiation (and a lot of heat) in the meantime.

(There are IIRC also some processes where neutrons get captured by elements in the core & those get transmuted into other, possibly unstable elements that then decay. That could also add to the decay heat in the core.)

And if you are not able to remove the heat quickly enough - the fuel elements do not care, they will just continue to heat up until they melt. :P

I am a bit skeptical you could have a big enough reservoir on hand to handle this in a passive manner. What I could imagine working, on the other hand (and what some more modern designs include, IIRC), is a passive system with natural circulation. E.g. you basically have a special dry cooling tower through which you pass water from the core; it heats up air, which carries the heat up, sucking in more air (chimney effect). The colder water is denser, so it sinks down, sucking in more warm water. Old hot-water heating in houses worked like this, without pumps.

If you build it just right, it should be able to handle the decay heat load without any moving parts or electricity until the core is safe.
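To put rough numbers on how slowly that decay heat trails off, here is a back-of-the-envelope sketch using the empirical Way-Wigner approximation (a crude simplification; the one year of prior operation and the 3 GW thermal figure are just assumptions taken from the discussion above):

  # Rough decay-heat estimate via the Way-Wigner approximation:
  #   P(t)/P0 ~= 0.0622 * (t**-0.2 - (t0 + t)**-0.2)
  # t = seconds since shutdown, t0 = seconds of prior operation at power P0.
  P0_thermal_mw = 3000.0      # ~3 GW thermal, as in the comment above
  t0 = 365 * 24 * 3600        # assume one year of full-power operation

  for label, t in [("1 minute", 60), ("1 hour", 3600),
                   ("1 day", 86400), ("1 week", 7 * 86400)]:
      frac = 0.0622 * (t ** -0.2 - (t0 + t) ** -0.2)
      print(f"{label:>8}: ~{frac * P0_thermal_mw:.0f} MW "
            f"({frac * 100:.2f}% of full thermal power)")

That comes out to very roughly 30 MW an hour after shutdown and still on the order of 10 MW a day later, which is why the cooling has to keep working for a long time.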


Yeah, it seems like you could design a cooling loop that runs just off the latent heat. I'm sure somebody in reactor design has sketched it out.

Some napkin math based on the heat capacity of water, assuming a 20 degree Celsius input, an 80 degree Celsius output, and 30 MW of heat, gives about 120 liters per second of water flow needed. That is about 10 million liters of water per day, or about 4 Olympic-sized swimming pools. I don't know how long you need to keep cooling for, but 10 million liters of water per day seems not insane and within the realm of possibility.

If you allow the water to turn into superheated steam you can extract much larger amounts of heat off the reactor as well.
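A quick sanity check of that napkin math (the 30 MW load and the 20/80 degree Celsius temperatures are just the assumptions already used above):

  # Water flow needed to carry away 30 MW with a 60 K temperature rise:
  #   Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
  Q = 30e6            # heat load in watts (assumed, from the thread)
  c_p = 4186          # specific heat of water, J/(kg*K)
  dT = 80 - 20        # temperature rise in kelvin

  m_dot = Q / (c_p * dT)            # ~119 kg/s, i.e. roughly 120 L/s
  litres_per_day = m_dot * 86400    # ~10.3 million litres per day
  pools = litres_per_day / 2.5e6    # an Olympic pool holds ~2.5 million litres

  print(f"{m_dot:.0f} L/s, {litres_per_day / 1e6:.1f} ML/day, ~{pools:.1f} Olympic pools/day")

So the figures in the comment above hold up, at least to napkin precision.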


There are reactor designs that work that way, but most civilian power plants are pressurized water reactors. It is important that the water stays pressurized or you get a Chernobyl.

Fukushima was based on a GE BWR (Boiling Water Reactor) design, so pressurization was not that much of an issue - if enough sufficiently cold water had been provided, there would have been no meltdown.

Dlang does not have rank polymorphism and it handles arrays just fine, with crazy speed in both compilation and execution.

It can be faster than the Fortran-based libraries that are still being used by Matlab, Rust, and Julia [1].

It will be interesting to compare the performance of Mojo's moblas BLAS library with the GLAS library in D.

[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):

http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...


If I understand correctly what is meant by rank polymorphism, it is not just about speed, but about ergonomics.

Taking examples I am familiar with, it is key that you can add a scalar 1 to a rank-2 array in numpy/matlab without having to explicitly create a rank-2 array of 1s, and numpy somehow generalizes that (broadcasting). I understand other array programming languages have more advanced/generic versions of broadcasting, but I am not super familiar with them.
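A minimal numpy illustration of the broadcasting described above:

  import numpy as np

  a = np.arange(6).reshape(2, 3)   # rank-2 array
  print(a + 1)                     # the scalar 1 is broadcast to every element

  col = np.array([[10], [20]])     # shape (2, 1)
  print(a + col)                   # shapes (2, 3) and (2, 1) broadcast together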


Unfortunately, while the community is great, what it doesn't have is a direction, so it keeps pivoting every couple of years, and with that it lost the adoption opportunity window it had a decade ago.

Form is temporary, class is permanent.

> If you vibe coding the errors are caught earlier so you can vibe code them away before it blows up at run time

You can say that again.

I was looking through the many comments for this particular one, and you hit the nail on the head.

The irony is that it took the entire GenAI -> LLM -> vibe coding cycle to settle the argument that typed languages are better for human coding and software engineering.


Sure, but in my experience the advantage is less than one would imagine. LLMs are really good at pattern matching, and as long as they have the API and the relevant source code in their context they won't make many (or any) of the errors that humans are prone to.

>who is using iceberg with hundreds of concurrent committers, especially at the scale mentioned in the article (10k rows per second)? Using iceberg or any table format over object storage would be insane in that case

You could achieve 100M database inserts per second with D4M and Accumulo more than a decade ago, back in 2014, and object storage was not necessary for that exercise [1].

Someone needs to come up with lakehouse systems based on D4M; it's long overdue.

D4M is also based on sound mathematics, not unlike the venerable SQL [2].

[1] Achieving 100M database inserts per second using Apache Accumulo and D4M (2017 - 46 comments):

https://news.ycombinator.com/item?id=13465141

[2] Mathematics of Big Data: Spreadsheets, Databases, Matrices, and Graphs:

https://mitpress.mit.edu/9780262038393/mathematics-of-big-da...

