Hacker News | PowerfulWizard's comments

Very interesting. My dream is to have something like this, a KV-store, a blob store, and pubsub all behind the same interface.


"like this" meaning you want wire PG compatibility, so you'd do something like this?

  UPDATE kv SET value = 'alpha' WHERE key = 'beta.charlie';
  UPDATE s3 SET value = $b64$good luck$b64$ WHERE key = '/some/s3/path';
  LISTEN whatever;


I don't know exactly what I want, but SQL already does a lot, so it would make the most sense to start with a database interface and augment from there, trying to build a system that handles all the common forms of durable storage used by applications.

The type of situation I'm thinking about is, for example, storing a blob in S3, storing metadata and a reference to the blob's path in a database row, sending a message into a queue to trigger some async processing, and updating a cache. It would be nice to be able to do this through a single API or service, and it would be really nice to do all this within some type of transaction abstraction that would allow all operations to pass or fail collectively. Really, really nice if the whole thing could be pay-as-you-go and scale horizontally-ish on shared infrastructure without managing nodes or slots or whatever.

I'm not a Postgres user so I don't know how far you can get currently, and I should probably look into it in detail. Coordinating blob/object storage, database, and pubsub operations is a pain point for me presently. I think overall system design is going to prevent a database-type system from being a good idea for blob storage, but I would still like to see someone put three systems in a trenchcoat and try to make it work behind one interface.


I was just trying to get a sense for where the line of demarcation was in your mind. PG has "foreign data wrappers"[1] that allow one to treat external ... whatever ... as if it were a table or procedure within PG. Just stupid powerful, IMHO. It is FDW-specific whether "transaction" means anything to the foreign system, so that may break your mental model but could still get you very close (e.g. BEGIN; UPDATE s3 SET ...; ROLLBACK; may not do anything sensible).

https://github.com/turbot/steampipe#steampipe-plugins and https://steampipe.io/docs/steampipe_postgres/overview may be relevant, although watch out for Steampipe's license

https://github.com/topics/foreign-data-wrapper and https://github.com/topics/fdw are some other examples

1: https://www.postgresql.org/docs/17/fdwhandler.html (although strictly speaking that page is for _authoring_ FDW, not a tl;dr of the concept)


This looks useful, I usually use guestfish to put files into the image before flashing but this could be a lot more flexible.


Here's what I use (as a bookmark):

    data:text/html,<body contenteditable style="line-height:1.5;font-size:20px;">
No save function obviously but this lets me open a new tab and dump some text.


Seeing the replies to your comment, I have to ask: Notepad++ persists your unsaved notes, has dark mode and themes, is fast and lightweight... why insist on forcing text-editor-like behavior on the browser? It feels like a solution in need of a problem.


(For myself) because 99% of my time is spent in an IDE or a browser, and there's less mental overhead for me to open a new tab and start typing than for me to open a new app and do so.


The IDE is literally a text editor. Why not hit file -> new file and write stuff in there?


For me it's the risk of littering in a project repo.

So I use Zim wiki instead: https://zim-wiki.org/


AFAIK all editors and IDEs I've ever used can open a random file in a new tab, even if it is outside of the repository.


Because "random human-language notes" are conceptually different from source code for me.


Sublime Text fits the “nameless notes” niche for me for similar reasons. It’s super speedy, has plenty of customizability, and has rock solid auto save+restore for unsaved text.


Can you restore a closed tab of unsaved text?


Not out of the box I believe, but I use a package from their directory to do just that.


>why insist on forcing text-editor-like behavior on the browser? It feels like a solution in need of a problem.

Because the browser is the operating system.

I might be only half joking.


I hear people say it a lot and I know what they mean, but I just can't agree... to me an OS runs on hardware (or virtualized hardware). Browsers run on an OS. If you have "boot to browser" the OS is still the kernel. Browsers are userspace.

It's like that saying "The difference between a boat and a ship is that a ship can carry a boat, but a boat can't carry a ship." And I know there is jslinux but at that point we're in a Turing tarpit where you can say that the Lua VM or wasm is "an OS" and the term is just a five-dollar word for "abstraction layer". Is a function call an OS? Come on.


Right? I use the GNOME default editor for this. It also persists unsaved notes, it's always available, and it has a few basic features that sometimes come in handy (regex match, etc.)


> persists your unsaved notes

Except when you brainfart on the OS shutdown and choose the wrong answer.

But yes, I even do the culling every couple of months.


You should pick a text editor that doesn’t throw up a dialog when quitting then. I use CotEditor on macOS specifically for random notes, everything’s unsaved and some notes have survived dozens of reboots over a number of years.


> doesn’t throw up a dialog when quitting then

It doesn't! At least when I Alt+F4 it.

> everything’s unsaved and some notes have survived dozens of reboots

Yep, this is exactly how I use it.

But somehow that one time (note: it was on the shutdown) something went terribly wrong.


I wonder if your notes were borked out of session.xml, but the files were still available at AppData\Roaming\Notepad++\backup.

I've changed machines where the user profile was in a different location, copied my AppData, and replacing the old location in Notepad++'s session.xml was enough to restore my unsaved notes.


Nah.

Of course I tried everything (except looking in the shadow copy? Don't remember), but in essence the shutdown triggered the Save All workflow (somehow) and I responded with 'No'.

*weep*


you mean emacs?


Nice! I bookmarked it and I'm gonna start using it, thank you.

For a quick and dirty save, you can press Ctrl+P to open the print window/dialog and select "Save as PDF", or you can press Ctrl+S and save as a single HTML file.

Edit: to make the text cursor focus automatically when the page loads, you can add the autofocus attribute to the body tag.


The following might work to save as well.

     data:text/html,<html contenteditable onload="document.body.innerHTML = localStorage['text']" oninput="localStorage['text'] = document.body.innerHTML" style="line-height:1.5;font-size:20px;">


While you can't save to localStorage as my sibling commenters have shown, greyface- down below in the thread posted a version that saves to the hash fragment of the URI. Saving to the (Data) URI has a benefit over localStorage of allowing you to save by bookmarking, which also enables you to save many notes, not just one.

I code-golfed greyface-'s code and made the text cursor autofocus on page load:

  data:text/html,<body contenteditable autofocus oninput="history.replaceState(0,0,'%23'+btoa(this.outerHTML))" onload="location.hash&& document.write(atob(location.hash.slice(1)))">#


Dug into this for a bit, sadly:

> Webstorage is tied to an origin. 'data:' URLs have unique origins in Blink (that is, they match no other origins, not even themselves). even if we decided that 'data:' URLs should be able to access localStorage, the data wouldn't be available next time you visited the URL, as the origins wouldn't match.


It will need a hostname or a page at least.

  Failed to read the 'localStorage' property from 'Window': Storage is disabled inside 'data:' URLs.


Along the same lines, I created a simple HTML site to interface with Japanese Translation tools: https://blog.frost.kiwi/just-a-text-box/


dark mode:

  data:text/html,<body contenteditable style="line-height:1.5;font-size:20px;color:lightgray;background-color:black">


SpaceX’s propulsive recovery testing began over ten years ago, meaning a development campaign starting today would be ten years behind SpaceX if they can progress at the same speed. It seems crazy to me that essentially no one is even trying to follow the path demonstrated by SpaceX.



It's true that Relativity Space's Terran R, Rocket Lab's Neutron, and Blue Origin's New Glenn are under development and plan first-stage reuse, although none have flown. Stoke Space is also developing a second-stage reuse solution, which SpaceX hasn't solved yet. I think another American company could have a Falcon 9 competitor within 5 years if they really move fast. But they haven't yet reached the stage that SpaceX reached 10 years ago.


I think the comment above grabbed Gross Profit rather than Total Revenue; this page shows a 1.1% profit margin: https://finance.yahoo.com/quote/KR/key-statistics?p=KR


Thanks I was looking for this.


One solution is to have the key on paper in a safe, and then let the lawyer know the key is in the safe. If you die they can drill the safe. The nature of the private key makes digital solutions possible, but they aren't necessary. It doesn't have to be handled differently from any highly valuable small object.


Depending on the amount, split the key on paper across multiple bank vaults and lawyers, with directions to contact all of them and bring the pieces together at your death.

But good luck finding someone you can trust to actually handle the money once they have the key.


One cool aspect of Shamir's Secret Sharing is you can set any threshold for how many fragments are required to recover the secret. This reduces the risk of losing the secret when some fragments are lost. The scheme also has perfect secrecy, so gaining a few fragments, but fewer than the threshold, gives an attacker no information about the secret.

https://francoisbest.com/horcrux


I wouldn't split the key because as another comment noted, you don't need all the pieces to brute-force the rest. Rather I would have several "keys" that when you XOR them all together, you get the real key. That way, any piece is useless without all the rest.

Unless, this is what you meant by "split" in which case I agree.


Even putting just half the key on paper, and omitting the rest, could make brute-forcing the rest feasible. Even knowing just 1 bit makes brute-forcing 2x as easy. 8 bits? 256x easier, etc.


One would use a scheme like Shamir's secret sharing [1], not literally cutting the exact bits of the key into strips.

> To unlock the secret via Shamir's secret sharing, a minimum number of shares are needed. This is called the threshold, and is used to denote the minimum number of shares needed to unlock the secret. An adversary who discovers any number of shares less than the threshold will not have any additional information about the secured secret-- this is called perfect secrecy. In this sense, SSS is a generalisation of the one-time pad (which is effectively SSS with a two-share threshold and two shares in total).

[1] https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
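For the curious, a toy k-of-n version of the scheme can be sketched in Node.js. This is illustrative only: real implementations work byte-wise over GF(256) with cryptographic randomness and careful share encoding, and every name here is made up for the sketch:

```javascript
// Toy Shamir secret sharing over the prime field GF(2^127 - 1).
// A secret is hidden as f(0) of a random degree-(k-1) polynomial;
// any k points determine f, fewer than k reveal nothing.
const P = 2n ** 127n - 1n; // a Mersenne prime

function mod(a) { return ((a % P) + P) % P; }

// Modular inverse via Fermat's little theorem: a^(P-2) mod P.
function inv(a) {
  let r = 1n, b = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e & 1n) r = mod(r * b);
    b = mod(b * b);
    e >>= 1n;
  }
  return r;
}

// Split `secret` (a BigInt < P) into n shares, any k of which recover it.
function split(secret, k, n) {
  const coeffs = [secret]; // f(0) = secret
  for (let i = 1; i < k; i++) {
    // demo-grade randomness; a real scheme needs a CSPRNG
    coeffs.push(BigInt(Math.floor(Math.random() * 2 ** 48)));
  }
  const shares = [];
  for (let x = 1n; x <= BigInt(n); x++) {
    let y = 0n; // evaluate f(x) by Horner's rule
    for (let j = coeffs.length - 1; j >= 0; j--) y = mod(y * x + coeffs[j]);
    shares.push([x, y]);
  }
  return shares;
}

// Lagrange interpolation at x = 0 recovers f(0) = secret.
function recover(shares) {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n, den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * -xj);
      den = mod(den * (xi - xj));
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}

const shares = split(123456789n, 2, 3); // 2-of-3
console.log(recover(shares.slice(0, 2))); // 123456789n
```

Any two of the three shares recover the secret; a single share, as the quote above explains, gives no information at all.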


(Shamir’s scheme is delightfully straightforward, but if polynomial interpolation over finite fields isn’t a thing you feel in your bones, try inventing an n-of-n-shares scheme that only uses xor and a random-number generator. Gb nyy ohg bar bs gur cnegvpvcnagf, tvir n puhax bs enaqbz qngn nf ybat nf gur frperg; gb gur ynfg bar, tvir gur kbe bs gur frperg naq gur enaqbz puhaxf. You probably don’t want that in production, but it’s nice to figure it out and even utterly simple to prove it secure, provided you understand the proof for one-time pads.)


This immediately came to mind as a possible tactic because polynomial interpolation is covered nicely in A Programmer's Introduction To Mathematics[1] which I started reading recently. Highly recommended.

[1]: https://pimbook.org/


Oh yeah, I know about that. I meant to intentionally release only part of the key specifically to make brute-forcing easier for your heirs. I mean, hey, they gotta work for it, you just give them a leg up! :)


Crypto dead mans switch like sarcophagus.io.

You can connect to an obituary oracle on Chainlink and release data to a prespecified law firm upon proof of death. Then make sure the law firm validates the death before opening. Wallets and keys inside. Or secret passphrases inside.


When all you have is a crypto hammer, everything you see is blockchain nails.


Or (somewhat ironically) a bank safe deposit box.


This isn't that ironic, as there are often safe deposit boxes with contents more valuable than the cash on hand of the bank branch itself.

Yes, it's ironic that digital currency is being protected by a physical bank, but that's really stretching for a laugh. It's SOP for banks, really.


I was more referring to the (somewhat fair) crusade against big banks in the crypto community in general. Tweeting against banks all day and talking about "code is law" while paying a safe deposit box fee and leaning on the traditional legal system (wills, etc) and banks (the box) scales somewhere from ironic to hypocritical.


No it doesn’t, at least in a sensible understanding of the crusade. (I’m not sure cryptographic Byzantine consensus is the panacea it is touted to be, but agree with its proponents as to whether many of the things they call problems with the traditional system are in fact problems.) It’s nice to have a technical solution to things that do not actually need human interpretation, and it’s nice to expand the set of these things. Whether you actually want human interpretation for the act of transferring money is questionable.

Fiat money is uniquely susceptible to repressive governments in a way that nothing was when people actually thought about countering those in a practical way, and bank transfers are even more so—see today’s news from Canada for an example that’s chilling whether or not you agree with the actual politics in play. That needs to be fixed, I think. It could be fixed by making money more resilient to government intervention or by making governments less likely to make malicious interventions, probably both. These approaches, and even approaches to these approaches, have different implications, so history will have to find the balance, but I’d be loath to just dismiss the former out of hand.

But death is a thing that needs human interpretation, at least for the foreseeable future, and thus those arguments don’t apply here. The current banking and actuarial system isn’t that insane for the most part, for a system that has to operate under the constraint of needing human interpretation. It’s just that I refuse to stop thinking about the extent to which such a constraint is actually present in any particular situation. In strongbox rental, it is. Great! And I say that as someone with an experience of withdrawing the contents of a safe deposit box from a branch of a failing bank, on the day before the doors of said branch were locked and tagged.

(Nothing about a strongbox rental business even needs to be connected with loans or securities in any way, it’s just that banks sort of organically grew both functions. No problem with that, but also no problem with somebody dissatisfied with any aspect of modern macroeconomics having no gripes against safe deposit boxes.)


I think the parent was pointing out the irony that every hour of effort spent on the crypto space so far has only made banks and other institutions even more critical in the end, because the death rate will be 100% for the foreseeable future, like it always has been.

It’s effectively backloading risk onto the very things claimed to be outmoded and replaceable.


Not ironic at all. It's a fact that banks have great physical security, no reason to not take advantage of that. If you have a cryptocurrency paper wallet in the bank, they don't know about it, it's not on their books, they can't lend it out without your knowledge and inflate the economy with it.


Note that banks usually require you to sign away all liability for anything placed in a safe deposit box, even if negligence or fraud on their part leads to the items being stolen or destroyed.

The risk is very low, but it is present.

See: https://www.nytimes.com/2019/07/19/business/safe-deposit-box...


That seems like owning small valuable objects with extra steps


One thing that would make a difference on small accounts is the ability to do prepaid billing only. That way you define your budget in advance and they enforce it. The problem with the current billing is that people who are new to the system have no hope of understanding what is going on and they have to accept the open-ended nature of the billing system to learn.


One issue is what gets shut down when you hit the max? If you have an EC2 instance running and you hit the max, do they shut it off? Would customers understand that and be ok with it? What if you have an S3 bucket? Should they just delete the data? That's probably not what you want.

You can basically implement a max bill now: set up a CloudWatch billing alarm and, when it reaches a certain amount, run a script. Your script could just shut everything down and delete everything, or do whatever is appropriate for your account. That's their solution to this.

Also they don't have instant feedback on usage -> cost. They batch process it. So if you get a huge spike in usage, AWS may not even know that for a while. They could in theory be willing to eat the cost of usage between it happening and their processing, but are probably unwilling.


There must be a process for unpaid bills; whatever that process is, they could just enact it at the user's threshold instead of their own. Ideally a soft limit that would disable networking and resource creation, and then later a hard limit where your account is wiped out.

Because of the potential overhang before the billing system catches up I think it would be appropriate to lower the service quotas on this type of account. I'm not sure if the customer can lower their own quotas which would be an alternate cost control strategy but a beginner wouldn't know to think of that anyway. The solution with billing alerts is good at a company level but too much for a beginner in my opinion.

I know unexpected costs were a concern for me when I started using AWS as a student in 2008 and it is still a concern for people in the same situation, just with so much more complexity on top of it all. It will be a tiny fragment of their revenue but as time goes on a higher and higher level of expertise is required to get started, even though you can accomplish a lot with just the free tier. The amount of progress they've made on this issue in the last 13 years is just not impressive.


> There must be a process for unpaid bills

They lock your account but keep all the resources active, and the bill just keeps going up until you pay to get back in.


Yeah, shut off EC2 instances, block access to all resources, etc. Preserve bucket data and other storage for N days or $M max allowance (ultimately billable) before deleting. AWS could limit how much storage they make available to someone with a budget, reducing their risk substantially.

It's pretty easy stuff, IMO, but the upside for them is low -- after all they are already #1.


That might work for you, but not everyone. They might even have legal trouble with such a system, if they delete data that was required to be retained for example. You're not thinking of all the edge cases.


Why is data remotely their problem?

If you don't have your data backed up in something other than Amazon, Amazon is the LEAST of your worries.


AWS Lightsail is pretty close to this. It’s still possible to get an overage if you have a lot of traffic, but otherwise it’s pretty safe.


For those of us who are helplessly impatient, run this in console:

    document.querySelectorAll(".countdown-calendar__door").forEach(e => e.classList.add("will-open"))


Here's all of the titles:

Arnold Schoenberg

W. B. Yeats’ Estrangement

Vladimir Nabokov’s Mary

Sinclair Lewis

A. A. Milne’s Winnie-the-Pooh

Faust directed by F. W. Murnau

Agatha Christie’s The Murder of Roger Ackroyd

D. H. Lawrence’s The Plumed Serpent

Igor Stravinsky

Don Juan directed by Alan Crosland

Louis Armstrong

Battling Butler directed by Buster Keaton

Diane Arbus

Oscar Micheaux

William Faulkner’s Soldiers’ Pay

Dorothy Parker’s Enough Rope

Zora Neale Hurston’s Color Struck

Jim Morrison

Arthur Conan Doyle’s The Land of Mist

Stevie Smith

Ivor Novello

Miyamoto Yuriko

T. E. Lawrence’s Seven Pillars of Wisdom

Sound recordings published prior to 1923

The Scarlet Letter directed by Victor Sjöström

Franz Kafka’s The Castle

Ludwig Wittgenstein

Vita Sackville-West’s The Land

André Gide

Bertolt Brecht’s Man Equals Man

Ernest Hemingway’s The Sun Also Rises

$$('.door-interior span.title').map(x=>x.textContent).join("\n")


Oh great! I have been looking forward to $$('.door-interior span.title').map(x=>x.textContent).join("\n")


The only title on the list I recognize, such a timeless classic.


The movie was okay but I never pictured $$ as looking like Chris Pratt


The casting of () seemed consciously inclusive, in a good way


Was it written by Little Bobby Tables?


Then this to show all of the titles

    document.querySelectorAll(".door-front").forEach(e=>e.remove())


I love this, and wish there were a community browser that auto-offers the most popular JS hacks to fix UX. Similar this can be done for JS paywalls and to get rid of annoying newsletter and GDPR boxes without agreeing to them.


That's the problem ViolentMonkey and sites like https://greasyfork.org/ are trying to fix, but as with many "community contributions" the quality is all over the place


I mainly use ViolentMonkey for my own scripts


It's super underrated, I think. I customize sites quite regularly now to fix things that annoy me, either removing things or modifying content so it takes advantage of a big monitor


> or to modify content so it takes advantage of a big monitor

here's looking at you, GitHub diff div

    document.querySelector(".application-main .container-xl").style.maxWidth="100%"
/me shakes his fist


As do I. I've never published anything on any community userscript website mostly because I am scratching my own itch(es) and find it suspicious that anyone else would have the same itch and yet want it solved in exactly the same way


There's a lot of people out there. Some might want things to behave a certain way, but then come across your way and decide it's better, or at least good enough. Sharing is caring


Is THAT what's going on? What an awful, miserable, hostile website.

Please stop trying to do silly Javascript tricks and just give me text and pictures.


Won't somebody please think of the engagement


publicdomainreview.org is a good site. It has obviously been a labor of love for many years and I've never known them to do shitty engagement tricks.


I'm optimistic about electric. Here is a high-level analysis of the power and energy density required for some electric vertical takeoff aircraft, compared to some existing batteries: https://www.pnas.org/content/118/45/e2111164118 (h.t. kittyhawkcorp twitter). The necessary power density is achieved; the energy density needs to improve by about 2x for these vehicles to attain their intended range.

The article also compares the range and energy efficiency to electric and ICE vehicles, accounting for the distance reduction from flying in a straight line versus driving on the road. If I recall correctly, it doesn't apply any extra value for time savings. The overall energy used in flying could be as little as 2-3x the energy used driving a terrestrial electric vehicle. Combine that with vertical takeoff and no traffic and we're looking at something pretty compelling.

And how much does it really need to cost compared to, for example, a Tesla? The weight will be more optimized and the safety regulations, I assume, are much stricter. The technical complexity seems similar but the volume will be much lower. I don't think it really works if you need a pilot's license, so full autonomy is probably also a prerequisite for an everyday application.

I think EVTOL will still be embryonic in 2 years, but impressive in 5 years.


Autonomous vehicles can't drive in a tunnel yet, and you're imagining flying autonomous cars in 5 years?

Flying cars already exist, they're called helicopters, and they are not a promising consumer technology, and never will be. Flying heavy materials (such as human flesh and bone) is far too energy intensive and inevitably produces too much noise. It is also far too dangerous to become a consumer technology.


1. Autonomy in the air is far easier than on the roads

2. Helicopters have a much lower L/D ratio in forward flight than the tiltrotors being proposed for eVTOLs.


1. Yes, but autonomy in a fixed tunnel would be easier still, and yet the Vegas Loop is using human drivers.

2. Helicopters have the advantage of actually existing in many varied form factors and designs, in common civil use, unlike tiltrotors.


Yes, in the air I'm expecting a very different scenario than on land. No pedestrians, just an element of collision avoidance for birds; all vehicles legally required to broadcast their position to an automated air traffic control, probably maintaining 100 meters between vehicles versus road traffic passing 1 meter from oncoming cars. In the air it would be almost a pre-planned route with a 50-meter collision avoidance corridor. Plus an emergency landing site selection process, potentially the ability to land on water, or an emergency parachute.

Part of the reason I found the paper I linked to be persuasive is that they predict the EVTOL aircraft only needing 2-3x the total energy of an EV. The energy cost in dollars could be less than gas for an ICE vehicle making the same trip. There are very light aircraft with 100HP engines, I'm picturing something light and birdlike.

