brandall10's comments

He also directed Bumblebee, perhaps the only great film in the Transformers universe.

Esp. on longer ISS trips. I can't believe this hasn't already been a sanctioned experiment.

I can believe it. Dunno how reasonable it would be to risk having someone in an altered state of mind on the ISS. What do you do if someone has a bad trip or decides to make poor decisions?

> What do you do if someone has a bad trip or decides to make poor decisions

The space shuttle had a hatch that could be opened to vacuum and was actually padlocked shut on some flights where the mission specialists weren't trusted by the commander [0].

[0] https://arstechnica.com/space/2024/01/solving-a-nasa-mystery...


If you mean STS-51-B, the commander duct-taped the hatch. After that flight, a padlock was added as a more permanent solution.

  > “I remember waking up at the beginning of a shift and seeing duct tape on the hatch," Gregory told Ars. "I did not know what the origin of it was, and I didn’t pay any attention to it. I may have, but I don’t recall asking Overmyer about it.”
Also this detail gives me pause:

  > The Space Shuttle has been retired for 13 years, but the padlock remains in the fabric of US spaceflight with Crew Dragon. A commander's lock is an __option__ for NASA's crews flying to the International Space Station on Crew Dragon, as well as private missions.
Why even make it optional? Seems better to officially take responsibility out of the commander's hands and make the lock mandatory. This would greatly reduce risk to onboard trust dynamics, since the commander is just following orders instead of signaling mistrust toward the crew.

This seems like one of those "who needs all those lifeboats??" situations. Why not avert the Titanic now?


Similar to what's been done on Earth during such experiments. You'd just have to do it in a controlled way in a prepared environment, with both an administrator and a person well versed in taking the substance present to minimize risk. Of course, with a sample size of one it won't yield much scientific benefit.

I'm not denying that intoxication leads to more bad decisions.

But sober people also make many bad decisions. A space ship needs to be resilient to bad decisions, and I assume/hope the ISS already is well fortified in that respect.


Typical bad decisions, especially by a professional crew of trained flight engineers, are quite different from "bad trip" bad decisions. Totally different risk vector and one they should have spent precisely zero seconds thinking about in the ISS's design.

It took only 17 Space Shuttle flights before a crew member threatened to kill themselves, prompting NASA to add additional locks to the hatches. I think a few seconds considering what a malicious actor could do would be worth it.

We're not discussing a malicious actor, we're discussing someone having a bad trip on LSD. It is a completely different thing.

The unpredictability means they might as well be a malicious actor.

Not really. Someone panicking on acid would do different things than a sober person trying to knock the ISS out of orbit. Obviously.

'Not really' - best not to bet lives and a huge amount of money on 'Not really'.

I don’t know what argument you’re trying to make. The question is: should the ISS be hardened against someone on LSD. My answer is: that is fucking absurd, and generic hardening against malicious actors is tangential at best.

...we're not talking about force-feeding someone a fifth of whiskey here.

A stable, thoughtful, trained scientist, in the company of supportive and emotionally intelligent colleagues, does not present a significant risk when fed a normal dose of LSD.


They absolutely can present a significant risk when fed a normal dose of LSD. And I'm no prude when it comes to this stuff. Very standard advice for psychedelics is set & setting, and "don't be physically trapped in an environment you might find uncomfortable" is a reasonable derivation.

Don't be in the endless void of space, which is already emotionally and psychologically taxing on human beings who are verifiably in peak physical and mental condition.

Seems like a pretty fucking bad set and setting honestly. Regular astronauts already deal with powerful feelings in their normal job. Adding a drug to that is a bad plan.


Even airplanes aren't resilient to truly bad decisions:

https://www.nytimes.com/2023/10/24/us/alaska-airlines-off-du...


> What do you do if someone has a bad trip or decides to make poor decisions?

The same thing you do on terra firma. You do your best to keep them calm, and talk them down. I've never had to physically restrain anyone, but seems like that would actually be easier to do on the ISS. Just wrap them up and let 'em float around

I'd also expect bad trips to not be as big of an issue in a NASA-sanctioned experiment with a much more closely controlled dose, vs. some tab you got off the guy in a tie-dyed t-shirt at a concert type of situation.


> I've never had to physically restrain anyone, but seems like that would actually be easier to do on the ISS. Just wrap them up and let 'em float around

A great way of restraining someone is pushing them against an immovable object. On Earth, that is either the floor or a wall; you can pin them against it, at least restricting their movement.

Walls and floors are the same in space, and the person can literally float away with six degrees of freedom, at any time.

I feel like physically restraining people would be harder in zero gravity than non-zero.


I was thinking simply a couple of bungee cords wrapped around them to keep them from flailing and then just let them drift around.

What's nice about the ISS is there's no way for someone to do something accidentally dangerous in there.

Are you being sarcastic?

That seems crazy. There are like 10 people in space at any given time. The idea that we might have sent someone mentally unstable given that we can be that selective is mind blowing. How in the world can you qualify for this if you aren't the best of the best of the best?

I don't see mental instability as an independent defect but as a byproduct of some of the long-tail traits needed to be the best of the best. To put it another way: many people are motivated to fill a void, and that same void can cause mental instability.

I agree not to do it casually.

But it could reasonably be arranged under supervision, with the option to anesthetize if needed.

They probably already have some protocol for erratic behavior.


People make poor decisions regardless of drugs.

I feel you don’t know enough about acid trips.

If my spaceship commander just told me he had taken 2 tabs, I would start writing my obituary.


Copilot isn't particularly useful. At best it comes up with small snippets that may or may not be correct, and rarely can I get larger chunks of code that work out of the gate.

But Claude Sonnet 3.5 w/ Cursor or Continue.dev is a dramatic improvement. When you have discrete control over the context (i.e. being able to select 6-7 files to inject), combined with Claude's superior ability, it is an absolute game changer.

Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production-ready 100 LOC solution, with a full complement of tests, for something that might otherwise take half a day.

I say this as someone with 26 yoe, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.
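For illustration, here's roughly what that workflow boils down to - a minimal sketch in plain Python (this is not Cursor's or Continue.dev's actual internals; the file paths, task, and model string are just assumptions, using the Anthropic SDK):

  import anthropic  # pip install anthropic

  # The hand-picked context - the "6-7 files" you'd select in the editor.
  # These paths are hypothetical.
  CONTEXT_FILES = ["models/user.py", "services/auth.py", "tests/test_auth.py"]

  def build_prompt(task: str) -> str:
      # Label each file so the model can tell the sources apart.
      parts = [f"### {path}\n{open(path).read()}" for path in CONTEXT_FILES]
      return "\n\n".join(parts) + f"\n\n### Task\n{task}"

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
  response = client.messages.create(
      model="claude-3-5-sonnet-20240620",
      max_tokens=2048,
      messages=[{"role": "user", "content": build_prompt(
          "Add token refresh to AuthService and update the tests.")}],
  )
  print(response.content[0].text)

The tools earn their keep in the file-picker UX and in applying diffs back to your tree, but the leverage comes from choosing exactly which files make it into the window.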


> I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.

Agreed. I feel like coding with AI is distilling the process back to the CS fundamentals of data structures and algorithms. Even though most of those DS&As are very simple, it takes experience to know how to express the solution using the language of CS.

I've been using Cursor Composer to implement code after writing some function signatures and types, which has been a dream. If you give it some guardrails in the context, it performs a lot better.
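To make the guardrails point concrete, here's the sort of typed skeleton you might write before asking Composer to fill in the bodies - just a sketch with hypothetical names:

  from dataclasses import dataclass

  @dataclass
  class Order:
      id: str
      amount_cents: int
      status: str  # "pending" | "paid" | "refunded"

  def total_outstanding(orders: list[Order]) -> int:
      """Sum of amount_cents across orders still in 'pending' status."""
      ...

  def mark_paid(orders: list[Order], order_id: str) -> Order:
      """Flip the matching order to 'paid' and return it; raise KeyError if not found."""
      ...

The signatures, types, and docstrings pin down intent, so the generated implementations have far less room to wander.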


The one thing I'm a little concerned about is my ability as an engineer.

I don't know if I'm losing or improving my skillset. This exercise of development has become almost entirely one of design and architecture, and reading more than writing code.

Maybe this doesn't matter if this is the way software is developed moving forward, and I'm certainly not complaining while working on a 2-person startup.


Which do you prefer, Cursor or Continue.dev?

Honestly, I haven't tried Cursor yet. It looks impressive, but I've heard it has some teething issues to work out. For my use case I'd end up using it very similarly to how I use Continue.dev, and probably pay for Claude API usage separately, which has been working out to about $12-$15 a month.

Human + AI writing tests >> human writing tests

The problem is the perception of a very rare skillset - i.e. being a CEO who did x/y/z at companies in a similar space, experience the board would like to acquire for their own company. If they can accomplish the same, the value of that far exceeds the cost, environmental impact, and public perception of something like this.

You might start with 5 or so candidates and then whittle it down to the 1 or 2 who are the right fit for the job. These people are already wealthy, with all that goes along with that, and the idea of relocating could be a hard ask. From a successful CEO's perspective, it's probably much easier to land another gig that caves to their demands than the other way around.

At this comp level you could say it's similar to a sports star getting traded to another team, and it is in a way, but that's part of what comes along with being someone like that... in the span of 20 years they may have to uproot their life 3-4x. Their career even starts off like that: they get selected by a team and, boom, they have to move somewhere they may never have visited, before their life gets calcified by generational wealth.


Right, otherwise the word "actual" wouldn't be in the notebook, which implies computer scientists were actively using the term prior to the event.


The Wikipedia entry for this confirms two sources depending on context, and in your case it came from a late-80s paper:

Sarin, DeWitt & Rosenberg, Overview of SHARD: A System for Highly Available Replicated Data, Technical Report CCA-88-01, Computer Corporation of America, May 1988

https://en.wikipedia.org/wiki/Shard_(database_architecture)#...


From a quick search right now, the term seems to have come from that same system, but the first reference appears to be older than 1988. It looks to be possibly 1985.

Following this link here: https://shkspr.mobi/blog/2021/06/where-is-the-original-overv...

In a comment at the bottom it references a paper published by a few people working jointly with Computer Corporation of America, MIT & Boston University.

If you view that referenced paper, "Correctness Conditions for Highly Available Databases" by N. Lynch, B. Blaustein & M. Siegel (https://apps.dtic.mil/sti/pdfs/ADA171427.pdf), and look at section 1.2, it clearly describes "SHARD (System for Highly Available Replicated Data)" as being under development at CCA. It also says that if you want to learn more about SHARD, see the paper's reference [SBK]. Checking the references section of that paper, it lists the following for [SBK]:

Sarin, S. K., Blaustein, B. T., and Kaufman, C. W., "System Architecture for Partition Tolerant Distributed Databases," IEEE Transactions on Computers C-34, 12 (December 1985). pp. 1158-1163.

Which means there was a paper published in 1985, describing the in development Shard system.

It is possible that in 1985 they hadn't yet named the system "Shard", and it only got that name by 1988 - but it'd be interesting to check out that 1985 paper and see if they used the term Shard at all.


Great work. It's esp. interesting that this is an acronym. You should submit a correction.


HN post here claims this paper may not exist https://news.ycombinator.com/item?id=36848605


That comment is referring to a different paper than the one I mentioned.

I seem to remember a conversation here on HN not too long ago where people tried to reconstruct the history behind "sharding" and, in particular, tried to find that 1980s paper you mentioned – without success. I believe they even contacted one of the authors.


Yes, that was this discussion:

https://news.ycombinator.com/item?id=36848605


The actual paper did exist, but no one has a copy anymore, according to the author who was contacted. It was an internal Xerox memo.


FWIW, I do now have a copy. I'm unsure of the copyright status so I'm a bit reluctant to share it.


> I'm unsure of the copyright status so I'm a bit reluctant to share it.

I can totally understand that, but at the same time I'd really love to have a look at that as well. Maybe you could reach out to archive.org and see if they'd be interested in hosting a copy?


Congratulations! How did you end up getting the copy?

Also, maybe university libraries and/or Archive.org could help you with the copyright question?


A commenter on my blog made contact with someone who had a copy.


I found that paper title as well when looking into this exact question. That paper does not have the number of citations I would expect if it is the source of the term. It's possibly the source, but it's not obviously the source.


"Why? There’s no definitive answer."

I'd argue the act of immigrating is itself risky behavior, so naturally immigrants would be a less risk-averse group.


I wonder how much of that is reflected in the differences between the USA and, for example, Europe (given that for a long time the USA was populated mostly by European immigrants).


Anyone know why Homebrew overtook MacPorts? I only have a vague recollection of a Rails colleague pushing me to switch circa 2013 or so and haven't given it much thought since, but it (MacPorts) seemed to be similarly ubiquitous prior.


When I started using a Mac in 2009, MacPorts, Fink (and I think there was another whose name I can't recall) simply wouldn't work for me. They would take very long to build what I wanted, there weren't nearly as many packages as there were in Debian/Ubuntu, and many were old versions. Worse, many build attempts would just fail.

In that scenario, brew worked like a charm. It was quick, had most or even more packages than Debian/Ubuntu and they were newer. Failure to install was rare.

Then Apple started yearly releases of OS X, and that broke both brew and my system hard, so I started investigating and found out about the many "shortcuts" brew took and how it violated system components. I was dismayed, and abandoned brew for good.

So I spent a period using many of my tools inside an Ubuntu VM, until probably 2013-2014, when for some reason I tried MacPorts again. I don't know why, but that time it was much more reliable, and because of Apple's SSDs, insane at the time with 2 GB/s of bandwidth, installs became quick enough. Packages were still somewhat lagging behind in available versions, but their variety had roughly reached the level of what was in Debian/Ubuntu, so it was good enough for me.

Then, the killer feature: I found out about MacPorts variants and selectors, which I find the most awesome thing in package managers to this date (I haven't tried nix, which might be magnitudes better in that regard). No need to use rvm, pyenv, or custom installs of gcc messing with make/autotools, and it was the only sane way of compiling various Haskell projects (before haskell-stack).


I don't know when they introduced it, but I believe MacPorts will build the common variants of the more-used packages. So, if you install a package with the default variants, you'll get a binary download instead of building from source.

But indeed; fast SSDs, parallel compilation, and modern CPUs really help!


I think MacPorts builds basically everything and offers it as a binary if they think they can distribute it legally


MacPorts was slower (bringing in its own dependencies for everything meant longer build steps) and required sudo more. There were some annoying fiddly parts that made it seem like the homebrew users around you were having more fun exploring packages.

It was also exciting how many packages and casks were in homebrew and it was easy to make your own.

Also, back then there were lots of people experiencing package managers for the first time and they took to homebrew easily.

Then so many projects started to publish brew install links as a way to get started; homebrew felt like a default.

Now, with our faster computers, more space, and more packages installed, and with MacPorts shipping more binaries and using its own unprivileged user, MacPorts' duplication of dependencies looks more like an advantage than a disadvantage. And because Homebrew taught so many people how to use package managers, MacPorts is not their first one, so it's easier to start using.


> Also, back then there were lots of people experiencing package managers for the first time and they took to homebrew easily.

I suppose it was almost 15 years ago now but this is what I recall. Homebrew was easier, snappier, and the general friction coefficient felt smaller.

It's a little funny reading this and then wondering... why did I leave MacPorts behind? I don't think I put much thought into it at the time; I rather went by feel. I was still somewhat new to this stuff, having started my career more in design than development.


I’d push back slightly on the “more space” claim due to Apple’s notorious stinginess for SSD & RAM.


Here's my guess.

Homebrew had at least these things going for it:

  - it has always had a strong emphasis on presenting a simple, clean, pleasant, pretty, playful UI and executed that well
  - when it came out, source-based package managers for macOS generally didn't have any binary caching mechanisms, so compile time mattered
    - Homebrew's embrace of the base system as opposed to bringing its own dependencies bought it greater reuse at the cost of robustness, driving down total time to install many packages
  - the language that `brew` and its packages were written in was trendy at the time as well as pre-installed on macOS, which made them instantly accessible to huge numbers of web developers
    - the older macOS package managers generally drew on traditions and tooling from the Linux world (e.g., Fink, with Debian tooling) or the wider Unix world (e.g., MacPorts and various *BSD ports systems and packages written in some Tcl IIRC).

The type of person with the experience that would lead them to prefer tools and conventions like one sees in Fink, MacPorts, and Pkgsrc, or to contribute to those projects, has likely always been dismayed, if not repulsed, by a number of Homebrewisms. I think we can therefore conclude that Homebrew didn't win the package availability race by converting MacPorts contributors— Homebrew succeeded in attracting a largely untapped pool of new contributors. Eventually there followed the majority of non-contributor users who just want to use whatever already offers the software they want to run.


At the point I switched from MacPorts to Homebrew, Homebrew just worked more reliably in my experience. It installed things quicker and with fewer build/install failures. I don't know enough about what was going on under the hood to have any theory as to why this was my experience; I don't want to know what's going on under the hood, I just want to type `install whatever` and have it work.


I guess MacPorts was (and is) geared more towards users with some proper UNIX or BSD background, e.g. people coming from FreeBSD.

Whereas Homebrew targets the typical Mac user who might need a CLI application occasionally, i.e. someone looking for simplicity, without being too technically savvy.

The latter group certainly makes up a much bigger share of users on macOS, especially nowadays.


Here's why I switched from MacPorts early on in Homebrew's life:

- brew had and has many more packages available

- brew updates versions more quickly

- brew uses much simpler paths that fit my brain better

- brew has a pleasing simplicity


When you installed a port with MacPorts, the idea was to use MacPorts' own builds for as many build and runtime dependencies as possible. Over time that set grew, so a port install would be slow until you had built enough dependencies. It also consumed more storage.

When you installed a package with brew, it used as many of the OS X, X11, and Xcode installed utilities as possible, so it was faster and used less storage. But then you would install an update from Apple and things would break because of that reliance - things like /usr/bin/perl.


People like beer, but Homebrew also had a cute site and made ports simpler than MacPorts. Turns out the complexity was maybe not unwarranted. I was among the first adopters of brew, but I've been back on MacPorts for years now.


This was a while back, in the Gentoo Linux heyday, so it was popular to compile things - except computers were slow then, so that meant waiting for compiles. The problem with MacPorts was that (IIRC, it's been a while) it compiled its own version of Python instead of just using the system Python, and that sometimes broke too. And then you had to compile all that shit again. brew won out because it was faster and didn't duplicate redundant shit for no perceived reason.


I recently worked for a company called Tandem Diabetes, which has had multiple closed-loop, FDA-regulated systems going back 9 years:

"In July 2014, Tandem announced that it had submitted a PMA for the t:slim G4 insulin pump, which integrated t:slim Pump technology with the Dexcom G4 Platinum CGM System. This device was approved by the FDA in September 2015."

https://en.wikipedia.org/wiki/Tandem_Diabetes_Care

We were still working on international support when I left last year. As you can imagine, there are quite a few regulatory hurdles esp. regarding patient data portability and access.


Neither phrase causes the LLM to evaluate the word itself; it just helps steer generation toward different parts of the training data.

Using more 'erudite' speech is a good technique for focusing an LLM on training data from people with a higher education level.

Using simpler speech opens up the floodgates more toward the general populace.
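As a rough sketch of what that looks like in practice (using the Anthropic Python SDK here; the model string and prompts are just assumptions), the same question asked in two registers:

  import anthropic  # pip install anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

  def ask(prompt: str) -> str:
      msg = client.messages.create(
          model="claude-3-5-sonnet-20240620",
          max_tokens=300,
          messages=[{"role": "user", "content": prompt}],
      )
      return msg.content[0].text

  # Erudite register - tends to pull toward formal, academic training data.
  print(ask("Kindly elucidate the thermodynamic rationale whereby ice "
            "exhibits a lower density than liquid water."))

  # Plain register - tends to pull toward simpler, more general material.
  print(ask("Why does ice float?"))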

