Hacker News Highlights
If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

Correct. You can get the same power with half the voltage by doubling the current.

The trouble is the wires. A given wire gauge is limited in its ability to conduct current, not power. So if you double the current, you'll need roughly twice as much copper in your walls, in your fuse panel, in your appliance, etc.

Additionally, resistive losses scale with the square of the current (P = I²R). If you double the current and halve the voltage, the same wires waste roughly four times as much power as heat (and even with twice the copper, you'd still lose twice as much). For just a house, this isn't a lot, but it's not zero.
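
A quick back-of-the-envelope sketch in Python makes the square-law visible. The 1500 W load and 0.1 ohm of wiring resistance below are made-up illustrative numbers, not real household figures:

    # Toy comparison of resistive wiring losses at 120 V vs 240 V for the same load.
    def wiring_loss(load_watts: float, volts: float, wire_ohms: float) -> float:
        """Power (in watts) dissipated as heat in the wiring."""
        current = load_watts / volts       # I = P / V
        return current ** 2 * wire_ohms    # P_loss = I^2 * R

    for volts in (120, 240):
        loss = wiring_loss(load_watts=1500, volts=volts, wire_ohms=0.1)
        print(f"{volts} V: {loss:.1f} W lost to heat")
    # 120 V: 15.6 W lost to heat
    # 240 V: 3.9 W lost to heat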

This is why US households still have 240V available. If you have a large appliance that requires a lot of power, like an oven, water heater, dryer, L2 EV charger, etc., you really want to use more voltage and less current. Otherwise the wires start getting ridiculous.

This is not to say that higher voltage is necessarily better. Most of the EU, and the UK in particular, has plugs and outlets that are substantially more robust and make it harder to accidentally connect line voltage to a human. Lots of people talk about how much safer, for instance, UK plugs/outlets are than US plugs. If you look at the numbers, though, the UK has more total deaths per year from electrocution than the US, despite the US being substantially more populous. This isn't because of the plugs or the outlets; US plugs really are bad and UK plugs really are good. But overall, the US has fewer deaths because we have lower voltage; it's not as easy to kill someone with 120V as with 240V.

So there's a tradeoff. There is no one-size-fits-all best solution.


Physicist here. The superconductivity in layered graphene is indeed surprisingly strange, but this popular article may not do it justice. Here are some older articles on the same topic that may be more informative:

https://www.quantamagazine.org/how-twisted-graphene-became-t...,

https://www.quantamagazine.org/a-new-twist-reveals-supercond....

Let me briefly give some reasons why this topic is so interesting. Electrons in a crystal always have both potential energy (electrical repulsion) and kinetic energy (set by the atomic positions and orbitals). The standard BCS theory of superconductivity only works well when the potential energy is negligible, but the most interesting superconductors --- probably including all high-temperature ones like the cuprates --- are in the regime where potential energy is much stronger than kinetic energy. These are often in the class of "unconventional" superconductors where vanilla BCS theory does not apply. The superconductors in layered (and usually twisted) graphene lie in that same regime of potential energy dominating kinetic energy. However, their 2D nature makes many types of measurements (and some types of theories) much easier. These materials might be the best candidates available for getting a handle on how unconventional superconductivity "really works". (Besides superconductors, these same materials have oodles of other interesting phases of matter, many of which are quite exotic.)


Tangent, but interesting: how do you get fair samples from a biased coin?

1. Take a string of biased samples, like 001011100101
2. Split it into pairs: 00 10 11 10 01 01
3. Keep only the pairs containing both a zero and a one: 10 10 01 01
4. Assign 0 or 1 to each remaining pair (e.g. 10 -> 1, 01 -> 0), giving 1 1 0 0: a fair sampling from an unbiased coin

Why does it work? Because even if p(0) ≠ p(1), p(01) = p(10).
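
A minimal Python sketch of that procedure (the von Neumann extractor); the 70% bias below is just an arbitrary number for the demo:

    import random

    def von_neumann_extractor(bits):
        """Turn biased independent coin flips into unbiased bits.

        Pairs up the stream: 10 -> 1, 01 -> 0, and 00/11 are discarded.
        Works because P(0 then 1) == P(1 then 0) no matter how biased the coin is.
        """
        fair = []
        for first, second in zip(bits[::2], bits[1::2]):
            if first != second:
                fair.append(first)   # 10 -> 1, 01 -> 0
        return fair

    biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
    fair = von_neumann_extractor(biased)
    print(sum(fair) / len(fair))   # close to 0.5 despite the 70/30 input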


On my off hours, I’ve been working through volumes 4A and 4B. They are really wonderful, I highly recommend them. They’re not practical for the vast majority of programmers, but the way he designs and writes about algorithms is remarkable, truly unique. The Dancing Links implementation in 4B in particular (updated significantly from his famous paper) is like a work of art, it’s such an intricate and gorgeous data structure, and blazing fast as well. Man still got it, in his 80s.
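
For anyone who hasn't seen it, the "dance" itself is a small pointer trick on circular doubly linked lists: a node can be unlinked and later relinked in O(1) because it keeps its old neighbor pointers, which is what makes the backtracking in Algorithm X so fast. A toy Python sketch of just that trick (not Knuth's actual 4B implementation):

    class Node:
        """Element of a circular doubly linked list, as used in dancing links."""

        def __init__(self, value=None):
            self.value = value
            self.left = self.right = self

        def insert_after(self, node):
            node.left, node.right = self, self.right
            self.right.left = node
            self.right = node

        def unlink(self):
            # Neighbors skip over this node; its own pointers stay intact.
            self.left.right = self.right
            self.right.left = self.left

        def relink(self):
            # Undo the unlink using the pointers this node still remembers.
            self.left.right = self
            self.right.left = self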

My grandpa was a fan of Nim, and at restaurants we’d play with sugar packets while waiting for food. It’s a great game to entertain kids. He also invented Dr. Nim which some gray beards may be familiar with. Turing Tumble is an evolution of Dr. Nim/DigiComp II.

Hi! I'm the dev here! I built this on a whim after seeing someone ask for it on Twitter. It was 12:30 at night, but I couldn't pass up the opportunity to build it.

The code is very simple; there's no backend at all, actually. I believe that's because Wikipedia's API is very permissive and you can just make the requests from the frontend. So you simply request random articles and get some snippets and the attached image!
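
For the curious, the whole thing boils down to roughly one call per card. Here is a Python sketch against Wikipedia's public REST API (the app does this from the browser, and its exact requests may differ):

    import requests

    # Wikipedia's REST API serves random article summaries without any API key.
    RANDOM_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

    def random_card():
        data = requests.get(RANDOM_SUMMARY, timeout=10).json()
        return {
            "title": data.get("title"),
            "snippet": data.get("extract"),
            "image": (data.get("thumbnail") or {}).get("source"),
        }

    print(random_card())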

I used Claude and Cursor to do 90% of the heavy lifting, so I am positive there's plenty of room for optimizations. But right now, as it stands, it's quite fun to play with, even without anything very sophisticated.

Here is the source code. https://github.com/IsaacGemal/wikitok


This is weird, but I make my living by digitizing old flipbooks into MP4 files!

Knowing quite a bit about the world of Costa Rican grocery stores -- many of which started using "Hiper-" (Spanish for hyper-) as a prefix in their names a few decades ago, to one-up markets merely named "Super-" [1] -- I'm actually quite surprised they didn't just rename themselves "Hiper Mario" and save the legal fees. But bravo to them for winning against all odds.

[1] https://ticotimes.net/2004/04/02/hipermas-supermarket-aims-f...


I've studied the Burrows-Wheeler Transform, I understand the transformation, I've re-implemented it countless times for kicks, I see how it improves compressibility, but for the life of me the intuition of _why_ it works has never really clicked.

It's a fantastic bit of algorithmic magic that will always impress me to see it.
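
For anyone who wants to poke at it, a naive sketch makes the effect visible: sorting all rotations groups similar contexts together, so the last column (the transform) comes out full of runs of identical characters, which is exactly what run-length and move-to-front coders love:

    def bwt(s: str, terminator: str = "$") -> str:
        """Naive Burrows-Wheeler Transform: sort all rotations, take the last column."""
        s += terminator
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rotation[-1] for rotation in rotations)

    print(bwt("banana"))       # annb$aa
    print(bwt("abracadabra"))  # ard$rcaaaabb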


When I was a lad, I spent some time in front of a mirror trying to teach myself to move my eyebrows independently, like Spock. I eventually succeeded, but in the process I also learned how to move my ears. One downside is that these ear muscles began to involuntarily try to help. For instance, if I am looking down while wearing glasses, my ears contract to grip the glasses so they don't fall off, and after a while these seldom used muscles ache from the effort.

Hold up! Omg, can someone who’s done physics chime in please… whenever I’ve looked at GUT etc, I’ve always seen U(n), SU(n), but never knew what they were - are they what’s referred to in this article? Is that just the Unitary Group and Special Unitary Group??! All that time I thought it was all impenetrable but it’s just algebra?

Omg wow... the theoretical physics I'm talking about is just quaternions and Lie algebra, isn't it? Oh… don't tell me quantum spin is just called spin because it's a spinor rather than something actually, metaphorically spinning?!

Please chime in if you know what I’m talking about and can confirm this or shoot it down.


I work on the Near Earth Object Surveyor space telescope (NEO Surveyor), writing simulation code which predicts which objects we will see. This one has drummed up a bit of interest due to its (relatively) high chance of impact. I actually spent quite a bit of time yesterday digging through archive images trying to see if it was spotted on some previous times it came by the Earth (no luck, unfortunately). Since we saw it so briefly, our knowledge of its orbit is not that great, and running the clock back to 2016, for example, ended up with a large chunk of sky where it could have been, and it is quite small. We will almost certainly see it again with NEO Surveyor years before its 2032 close encounter. I have not run a simulation for it, but I would not be surprised if LSST (a large ground telescope survey which is currently coming online) catches it around the same time NEO Surveyor does.

Our knowledge of the diameter of this object is a bit fuzzy: because of surface reflectivity, small shiny things can appear as bright as large dark things. This is one of the motivations for making NEO Surveyor an IR telescope, since in IR we see black-body emission from the objects, which is mostly size dependent and only weakly albedo dependent.
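
To put rough numbers on that fuzziness, here is the standard visible-light relation between absolute magnitude H, albedo, and diameter; the H value below is an illustrative round number, not an official figure for this object:

    import math

    def diameter_m(h_mag: float, albedo: float) -> float:
        """Standard relation D(km) = 1329 / sqrt(albedo) * 10**(-H/5), converted to meters."""
        return 1000.0 * 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

    # Same apparent brightness, very different sizes depending on the surface:
    for albedo in (0.05, 0.15, 0.45):   # dark, middling, shiny
        print(f"albedo {albedo:.2f}: ~{diameter_m(24.0, albedo):.0f} m across")
    # albedo 0.05: ~94 m across
    # albedo 0.15: ~54 m across
    # albedo 0.45: ~31 m across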

There is an even tinier chance that if it misses the Earth in 2032, it could hit the Moon. I haven't run the numbers precisely for that one, but it impacted a few times in some Monte Carlo simulations.

If anyone is interested in orbit dynamics, I have open sourced some of the engine we are using for observation predictions: https://github.com/Caltech-IPAC/kete

It is relatively high precision, though JPL Horizons has more accurate gravitational models and is far better suited for impact studies. My code is primarily for predicting when objects will be seen in telescopes.


We actually had a company tab at that ice cream shop - anyone who had a Pebble on ate for free :)

The description of DeepSeek reminds me of my experience in networking in the late 80s - early 90s.

Back then a really big motivator for Asynchronous Transfer Mode (ATM) and fiber-to-the-home was the promise of video on demand, which was a huge market in comparison to the Internet of the day. Just about all the work in this area ignored the potential of advanced video coding algorithms, and assumed that broadcast TV-quality video would require about 50x more bandwidth than today's SD Netflix videos, and 6x more than 4K.

What made video on the Internet possible wasn't a faster Internet, although the 10-20x increase every decade certainly helped - it was smarter algorithms that used orders of magnitude less bandwidth. In the case of AI, GPUs keep getting faster, but it's going to take a hell of a long time to achieve a 10x improvement in performance per cm^2 of silicon. Vastly improved training/inference algorithms may or may not be possible (DeepSeek seems to indicate the answer is "may") but there's no physical limit preventing them from being discovered, and the disruption when someone invents a new algorithm can be nearly immediate.


My dad was one of these ARVN soldiers. In the final days of the war, he and his drill sergeant stole a helicopter as Saigon fell and flew west, expecting to keep fighting. They wound up in a refugee camp in Thailand and eventually made it to the US. He wouldn't see his family again until Clinton normalized relations with Vietnam 20 years later.

In those final moments, soldiers who knew how to fly took whatever aircraft they could get their hands on (Chinooks, Hueys, Cessnas, etc.) and flew aimlessly, hoping to run into friendly forces along the way before their fuel ran out.


The Weierstrass function is cool, but the undisputed champion of calculus counterexamples has to be the Dirichlet function [1]:

f(x) = 1 if x is rational, 0 otherwise.

It is defined over all real numbers but continuous nowhere. Also if you take the Dirichlet function and multiply it by x so you get

g(x) = x if x is rational, 0 otherwise

…then you have something that is continuous at exactly one place (0) and nowhere else, which also is pretty spectacular.
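
A quick sketch of why g behaves that way, using the usual limit definition of continuity:

    % Continuity at 0: |g(x) - g(0)| = |g(x)| <= |x|, so delta = epsilon works.
    \[
        |g(x) - g(0)| \le |x| \;\Longrightarrow\; g \text{ is continuous at } 0.
    \]
    % Discontinuity at any a != 0: rationals near a give values near a,
    % irrationals give 0, so g has no limit at a.
    \[
        \lim_{\substack{x \to a \\ x \in \mathbb{Q}}} g(x) = a
        \;\neq\; 0 =
        \lim_{\substack{x \to a \\ x \notin \mathbb{Q}}} g(x)
        \qquad (a \neq 0).
    \]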

[1] https://mathworld.wolfram.com/DirichletFunction.html


Super impressive, and awesome to see that you were able to use Framework Laptop hinges. Let me know if you need more. We have a ton of remaining 3.3kg ones!

My family’s phone number when I was a child was both a palindrome and a prime: 7984897.

My parents had had the number for two decades without noticing it was a palindrome. I still remember my father’s delight when he got off a phone call with a friend: “Doug just said, ‘Hey, I dialed your number backwards and it was still you who answered.’ I never noticed that before!”

A few years later, around 1973, one of the other math nerds at my high school liked to factor seven-digit phone numbers by hand just for fun. I was then taking a programming class—Fortran IV, punch cards—and one of my self-initiated projects was to write a prime factoring program. I got the program to work, and, inspired by my friend, I started factoring various phone numbers. Imagine my own delight when I learned that my home phone number was not only a palindrome but also prime.
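
In the same spirit, here's what that project amounts to today: a few lines of trial division (in Python rather than Fortran IV) that also re-check the palindrome claim:

    def prime_factors(n: int) -> list[int]:
        """Trial-division factoring, the same basic approach a short Fortran program could take."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    number = 7984897
    print(str(number) == str(number)[::-1])  # True: reads the same backwards
    print(prime_factors(number))             # a single factor equal to n means n is prime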

Postscript: The reason we hadn't noticed that 7984897 was a palindrome was that, until around 1970, phone numbers in our area were written and spoken with the telephone exchange name [1]. When I was small, I learned our phone number as “SYcamore 8 4 8 9 7” or “S Y 8 4 8 9 7.” We thought of the first two digits as letters, not as numbers.

Second postscript: I lost contact with that prime-factoring friend after high school. I see now that she went on to earn a Ph.D. in mathematics, specialized in number theory, and had an Erdős number of 1. In 1985, she published a paper titled “How Often Is the Number of Divisors of n a Divisor of n?” [2]. She died two years ago, at the age of sixty-six [3].

[1] https://en.wikipedia.org/wiki/Telephone_exchange_names

[2] https://www.sciencedirect.com/science/article/pii/0022314X85...

[3] https://www.legacy.com/us/obituaries/legacyremembers/claudia...


If you were around in the '80s and '90s you might have already memorized the prime 8675309 (https://en.wikipedia.org/wiki/867-5309/Jenny). It's also a twin prime, so you can add 2 to get another prime (8675311).

Reminds me of the time I turned myself into a Van de Graaff generator at work.

I was a theater projectionist, back when you had 20-minute reels you had to constantly change while babysitting two high-voltage, water-cooled, carbon arc projectors. Sometimes the film would break and you'd have to splice it. So when the theater got a print in, you had to count and log the number of splices for each reel; then the next theater would do the same and retire the print when it got too spliced up (plus, sometimes if it was the last night of a run, some lazy projectionists would splice it in place with masking tape and then you'd have to fix it). Sometimes you had to splice in new trailers or remove inappropriate ones as well.

Anyway, you counted splices by rapidly winding through the reel with a benchtop motor (with a speed control) belted to a takeup reel while the source spun freely. Then you let the film slide between your fingers, counting each “bump” you felt as it wound through. I was told to ground myself by touching the metal switch plate of the speed control knob with my other hand. One night I forgot and let go, until my hair started rising. I'd gone through most of the reel at a very high speed and acquired its charge.

I reached for the switch plate and shot an 8-10” arcing discharge between the plate and my fingers.

Lesson learned, I held the switch plate from then on.


This is about an explicit argument of type "Context". I'm not a Go user, and at first I thought it was about something else: an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it.

React has "Context", SwiftUI has "@Environment", Emacs Lisp has dynamic scope (so I hear), C# has AsyncLocal, and Node.js has AsyncLocalStorage.

This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?) but is actually very useful and can result in cleaner code with fewer globals or fewer superfluous function arguments. Imagine passing a logger like this, or feature flags. Or imagine setting "debug = True" before a function, and it applies to everything down the call stack (but not in other threads/async contexts).

Implicit context (properly integrated into the type system) is something I would consider in any new language. And it might also be a solution here (although I would say such a "clever" and unusual feature would be against the goals of Go).
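
Python's contextvars module is a concrete example of the pattern described above; a minimal sketch (of the general idea, not of the Go proposal):

    from contextvars import ContextVar

    # Set once near the top of a request or task, read anywhere down the call
    # stack, isolated per thread and per async task.
    debug: ContextVar[bool] = ContextVar("debug", default=False)

    def deep_helper():
        if debug.get():              # no "debug" parameter threaded through
            print("verbose details...")

    def handler():
        deep_helper()

    token = debug.set(True)          # applies to everything called after this point
    handler()                        # prints "verbose details..."
    debug.reset(token)               # restore the previous value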


> Originally, if you typed an unknown command, it would just say "this is not a git command".

Back in the 70s, Hal Finney was writing a BASIC interpreter to fit in 2K of ROM on the Mattel Intellivision system. This meant every byte was precious. To report a syntax error, he shortened the message for all errors to:

    EH?
I still laugh about that. He was quite proud of it.

I was there at the time and until the end.

That cartoon meme with the dog sitting with a cup of coffee or whatever and telling himself "This is fine", while everything is on fire, is probably the best way to describe how things felt at Nokia back then.


Daubechies wavelets are such incredibly strange and beautiful objects, particularly for how deviant they are compared to everything you are typically familiar with when you are starting your signal processing journey… if it’s possible for a mathematical construction to be punk, then it would be the Daubechies wavelets.

Co-founder of Quickwit here. Seeing our acquisition by Datadog on the HN front page feels like a truly full-circle moment.

HN has been interwoven with Quickwit's journey from the very beginning. Looking back, it's striking to see how our progress is literally chronicled in our HN front-page posts:

- Searching the web for under $1000/month [0]

- A Rust optimization story [1]

- Decentralized cluster membership in Rust [2]

- Filtering a vector with SIMD instructions (AVX-2 and AVX-512) [3]

- Efficient indexing with Quickwit Rust actor framework [4]

- A compressed indexable bitset [5]

- Show HN: Quickwit – OSS Alternative to Elasticsearch, Splunk, Datadog [6]

- Quickwit 0.8: Indexing and Search at Petabyte Scale [7]

- Tantivy – full-text search engine library inspired by Apache Lucene [8]

- Binance built a 100PB log service with Quickwit [9]

- Datadog acquires Quickwit [10]

Each of these front-page appearances was a milestone for us. We put our hearts into writing those engineering articles, hoping to contribute something valuable to our community.

I'm convinced HN played a key role in Quickwit's success by providing visibility, positive feedback, critical comments, and leads that contacted us directly after a front-page post. This community's authenticity and passion for technology are unparalleled. And we're incredibly grateful for this.

Thank you all :)

[0] https://news.ycombinator.com/item?id=27074481

[1] https://news.ycombinator.com/item?id=28955461

[2] https://news.ycombinator.com/item?id=31190586

[3] https://news.ycombinator.com/item?id=32674040

[4] https://news.ycombinator.com/item?id=35785421

[5] https://news.ycombinator.com/item?id=36519467

[6] https://news.ycombinator.com/item?id=38902042

[7] https://news.ycombinator.com/item?id=39756367

[8] https://news.ycombinator.com/item?id=40492834

[9] https://news.ycombinator.com/item?id=40935701

[10] https://news.ycombinator.com/item?id=42648043


I worked for Peter Kirstein for many years - he always had wonderful stories to tell.

In the article Peter talks about the temporary import license for the original ARPAnet equipment. The delayed VAT and duty bill for this gear prevented anyone else from taking over the UK internet in the early days, because the bill would then have become due. But he didn't mention that if the original ARPAnet equipment was ever scrapped, the bill would also eventually become due.

From when I was first at UCL in the mid 1980s until well into the 90s, all that equipment was stored disused in the men's toilets in the basement. Eventually Peter decided someone had to do something about it, but he couldn't afford the budget to ship all this gear back to the US. Peter always seemed to delight in finding loopholes, so he pulled some strings. He was always very well connected -- UCL even ran the .int and nato.int domains for a long time. So, at some point, someone from UCL drove a truck full of obsolete ARPAnet gear to some American Air Force base in East Anglia that was technically US territory. Someone from the US Air Force gave them a receipt, and the gear was officially exported. And there it was left, in the US Air Force garbage. Shame it didn't end up in a museum, but that would have required paying the VAT bill.


That March 1977 map always brings back a flood of memories to this old-timer.

Happy nights spent hacking in the Harvard graduate computer center next to the PDP-1/PDP-10 (Harv-1, Harv-10), getting calls on the IMP phone in the middle of the night from BBN network operations asking me to reboot it manually as it had gotten wedged...

And, next to me, Bill Gates writing his first assembler/linker/simulator for the Altair 8080... (I tried talking him out of this microcomputer distraction -- we have the whole world of mainframes at our fingertips! -- without success.)

(Edit:) We also would play the game of telnet-till-you-die, going from machine to machine around the world (no passwords on guest accounts in the early days), until the connection died somewhere along the way.

Plus, once the hackers came along, Geoff Steckel (systems guy on the PDP-10) wrote a little logger to record all incoming guests' keystrokes on an old teletype, so we could watch them attempting to hack the system.


Last Saturday, two other firefighters and I managed to find a woman lost in a maze of 40+ miles of trails. Her hip had dislocated; she could not move. The temperatures were in the upper 20s F (-3C or so) with serious windchill amidst 35mph/56kph wind gusts. It was extremely dark. We stayed with her and tried to keep her as warm as possible until a UTV arrived to extricate her. I was home by 01:00, after 4hrs outside under the stars.

In the end we didn't really do much at all, but it felt like one of the most meaningful nights of my entire life.


In 2005, my paper on breaking RSA by observing a single private-key operation from a different hyperthread sharing the same L1 cache -- literally the first publication of a cryptographic attack exploiting shared caches -- was rejected from the cryptology preprint archive on the grounds that "it was about CPU architecture, not cryptography". Rejection from journals is like rejection from VCs -- it happens all the time and often not for any good reason.

(That paper has now been cited 971 times according to Google Scholar, despite never appearing in a journal.)


The original answer to "why does FastMail use their own hardware" is that when I started the company in 1999 there weren't many options. I actually originally used a single bare metal server at Rackspace, which at that time was a small scrappy startup. IIRC it cost $70/month. There weren't really practical VPS or SaaS alternatives back then for what I needed.

Rob (the author of the linked article) joined a few months later, and when we got too big for our Rackspace server, we looked at the cost of buying something and doing colo instead. The biggest challenge was trying to convince a vendor to let me use my Australian credit card but ship the server to a US address (we decided to use NYI for colo, based in NY). It turned out that IBM were able to do that, so they got our business. Both IBM and NYI were great for handling remote hands and hardware issues, which obviously we couldn't do from Australia.

A little bit later Bron joined us, and he automated absolutely everything, so that we were able to just have NYI plug in a new machine and it would set itself up from scratch. This all just used regular Linux capabilities and simple open source tools, plus of course a whole lot of Perl.

As the fortunes of AWS et al rose and rose and rose, I kept looking at their pricing and features and kept wondering what I was missing. They seemed orders of magnitude more expensive for something that was more complex to manage and would have locked us into a specific vendor's tooling. But everyone seemed to be flocking to them.

To this day I still use bare metal servers for pretty much everything, and still love having the ability to use simple universally-applicable tools like plain Linux, Bash, Perl, Python, and SSH, to handle everything cheaply and reliably.

I've been doing some planning over the last couple of years on teaching a course on how to do all this, although I was worried that folks are too locked in to SaaS stuff -- but perhaps things are changing and there might be interest in that after all?...

