Hacker News | ralegh's comments

Interspecies communication is a massively underrated field.

We've bridged human cultures in the past, which is easier because humans do similar(ish) things; we can use sight, touch, smell, etc. to establish common ground.

We can communicate simple things with pets, though in my experience they learn from body language and intonation; understanding grammar and language feels like a step further.

What's the common ground with whales?

Like how Eskimos have 100 words for snow, whales could have thousands of phrases for water, currents, temperature, storms, fish, and the migration of different species. A language of relative position needed for pack hunting. They might tell stories about El Niño, earthquakes, tsunamis.

If they have social structure we may share ideas of relationships, friendship, giving (food), owing, sharing, helping, etc.

We might be able to correlate their speech with weather patterns and animal sightings. We could probably start a two-way communication; I wonder whether we or they would have better forecasts for sea conditions. They could act as a network of hundreds of thousands of sensors.

Sperm whales travel so far that even without maps they might know the shape of the continents.

Very excited for the future.


From what I understand, that "100 words for snow" thing isn't really true. The language just uses compound words, like German, so an adjective + noun is rolled into one word (e.g. "powder snow" would be "Pulverschnee" in German, but it's really just the words for powder and snow without the space).


More than you ever wanted to know:

The snow words myth: progress at last - https://languagelog.ldc.upenn.edu/~myl/languagelog/archives/...

Bad science reporting again: the Eskimos are back - https://languagelog.ldc.upenn.edu/nll/?p=4419

"Words for snow" watch - https://languagelog.ldc.upenn.edu/nll/?p=3497

The Great Eskimo Vocabulary Hoax - https://web.archive.org/web/20181203001555/http://users.utu....

"Eskimo Words for Snow: A Case Study in the Genesis and Decay of an Anthropological Example" - https://www.jstor.org/stable/677570

https://en.wikipedia.org/wiki/Eskimo_words_for_snow


As a brief aside, I think "Inuit" is the preferred term for people mistakenly called Eskimo.


As the first article linked to explains:

> the language family is generally called Eskimo or Eskimoan, because it includes the Yup'ik languages of Siberia and Alaska as well as the Inuit languages from the northeastern half of Alaska across Canada to Greenland


And I've heard Eskimo is a preferred collective term for North American indigenous arctic dwellers, because Inuit is just one tribe/ethnicity among a few!

So it goes.


Just think about it from the other direction.

To most Australian Aborigines, whites are called "Hollanders".

How much are you offended by that?


> How much are you offended by that?

Tremendously :P where I'm from "hollander" is a type of pipe fitting.


Well, you should have discovered Australia first then.


.. but I am thinking about it from the other direction.

From the perspective of the non-Inuit Eskimos.


Often people claim that Hungarian has over 50 words for "you" (https://dailymagyar.wordpress.com/wp-content/uploads/2019/05...)

But even the concept of what a "word" is in Hungarian is complex. Words have so many different forms depending on the context. As a Hungarian speaker, I perceive three forms of you (te, ön, maga) and one ending that is added to other words to form a second-person form of the word (-d), but even that is a complete oversimplification.

I don't think there is any reason why whale languages would have a concept of discrete words like we have in English.


Fun fact: that is why linguists are more interested in spoken language than written language. Written languages are ultimately "amateur" attempts to codify spoken (natural) languages. Spoken language consists of utterances, not letters, words, or sentences. Analyzing language requires grouping sounds into compounds that serve specific functions or carry specific semantics, but for spoken language the structure will be a lot fuzzier and more complex than for the simplified written language, even if the author attempts to replicate spoken language in writing. Even phonemes don't tell the full story.


A list of 65 English words/phrases for types of snow by a skier: https://skimo.co/words-for-snow - though it's missing a few: "spring snow", "Sierra cement".


Ok, so let’s say there are ten qualifiers and ten base words for snow. That makes 100 compound words.

It’s still substantially more snow-related vocabulary than in German, it seems to me.
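A quick check of the combinatorics, with invented placeholder word lists (not real vocabulary in any language):

```python
from itertools import product

# Placeholder lists -- purely illustrative, not actual snow vocabulary.
qualifiers = ["powder", "wet", "dry", "crusty", "packed",
              "fresh", "old", "drifting", "falling", "melting"]
bases = [f"snowword{i}" for i in range(10)]

# German-style compounding: each adjective + noun pair rolled into one word.
compounds = [q + b for q, b in product(qualifiers, bases)]
print(len(compounds))  # 10 qualifiers * 10 bases = 100 compounds
```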


Phrases, then.


I like how you start out with "their experience is so alien how can we even expect to have words for their relevant concepts", and then proceed to list a bunch of words for their relevant concepts.


I can understand why scientists, linguists, and whale enthusiasts might be interested in understanding whale communication. But I have a much harder time imagining that whales have much to say to humans other than, "Please kindly fuck off" in 99% of cases.


Dogs have a lot to say to humans. Most of it also applies to whales:

"Feed me"

"I'm hurt, help me"

"I found the thing you wanted. How about a reward?"

If you can solve the whale's problems, then it will have things to say to you.


Dogs have lived with humans for millennia. The majority of whale individuals can probably solve their own problems far better than humans can.


It might be true that the filter feeder whales are better at feeding themselves than we are at feeding them. But probably not.

For the predatory whales, it's definitely untrue.

The reason dogs live with humans is that it's easier to have their problems solved by the humans than to do it themselves. That's why the phenomenon continues and why it began.


I think finding out what they're saying about (rather than to) us would be amazing


Great! Something I've always wanted.

I'd love to be able to use a bit more type-y Go such as Borgo, and have a Pythonesque dynamic scripting language that latches onto it effortlessly.

Dynamic typing is great for exploratory work, whether that's ML research or developing new features for a web app. But it would be great to be able to morph it over time into a more specified strongly typed language without having to refactor loads of stuff.

Like building out of clay and firing the parts you are happy with.

Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.
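Python's optional annotations already allow a rough version of this clay-then-fire workflow; a minimal sketch (function names made up):

```python
from typing import List

# "Clay": exploratory version, no annotations, easy to reshape.
def mean(xs):
    return sum(xs) / len(xs)

# "Fired": once the shape is stable, add types so a checker
# (e.g. mypy) can verify every call site statically.
def mean_typed(xs: List[float]) -> float:
    return sum(xs) / len(xs)

print(mean([1, 2, 3]))         # 2.0
print(mean_typed([1.0, 3.0]))  # 2.0
```

The missing piece, as the parent says, is the later steps: nothing in CPython turns the fired version into Go- or Rust-level compiled code for you.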


> Like building out of clay and firing the parts you are happy with.

> Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.

We do exactly that with Common Lisp. It compiles to different languages/frameworks depending on what we require (usually sbcl is more than enough, but for instance for embedded or ML we need another step). All dev (with smaller data etc.) is in sbcl, so with all its advantages.


Is there somewhere I could read more about this? I've always wanted to learn Lisp but lacked a specific need for it.


We don’t necessarily do good lisp things; we use Common Lisp because macros and easy DSLs allow us to use CL for everything we do, while using, for us, the best dev and debugging env in the world. So we want to do the exploration, building, and debugging all in CL, and after that compile, possibly, to something better depending on the target. We trade for that a little bit of inconvenience (as in: leaky abstraction), but it's been worth it for the past 30+ years.

For learning CL, the lisp subreddit is good and has the current best resources on it. Lately there is a guy making a GUI toolkit (CLOG) who is doing good work spreading general lisp love by making it modern, including tutorials. And there are others too.


Dart? Version 1 was a lot like JavaScript/TypeScript in one spec (a dynamic language with optional unsound typing). Version 2 uses sound typing, but you can still leave variables unannotated (and the compiler will infer type "dynamic") for scripts.


Sounds like JavaScript and TypeScript would be a good fit for you. Highly expressive, dynamic and strongly typed, and highly performant both on the server side and within the browser.


I do like JavaScript, but it strikes a weird balance for me where it's a bit too easy to write and a bit too verbose, so I tend to end up with hard-to-maintain code. It feels good at the start of a project but rarely a few weeks in. Also not a fan of the node ecosystem; I try to use deno where I can (maybe that would be bun these days).


perhaps rescript [https://rescript-lang.org/] even more than typescript


I like the idea, but in all honesty I have difficulty imagining it working in practice. Once your python code is stable (i.e. you've worked out 99% of the bugs you might have caught earlier with strict type checking), would there be any incentive to go back and make the types more rigid or rigorous? Would there be a non-negligible chance of introducing bugs in that process?


by the time you have your code in its final state (i.e. you're done experimenting) and shaken out the bugs, your types are mostly static; they're just implicitly so. adding annotations and a typechecker helps you maintain that state and catch the few places where type errors might still have slipped through despite all your tests (e.g. lesser-used code paths that need some rare combination of conditions to hit them all but will pass an unexpected type through the call chain when you do). it is very unlikely that you will introduce bugs at this point.
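As a hypothetical sketch of that "rare code path" failure mode (function names invented for illustration):

```python
from typing import Optional

# Untyped version: the common path returns an int, but the
# rarely-hit path returns a str -- tests may never catch it.
def parse_port(value, default=8080):
    if value is None:
        return default   # int
    return value         # bug: still a string like "9000"

# Annotated version: a checker like mypy flags the mismatch
# in the untyped pattern above and forces the int() conversion.
def parse_port_typed(value: Optional[str], default: int = 8080) -> int:
    if value is None:
        return default
    return int(value)

print(parse_port_typed("9000"))  # 9000 (an int)
print(parse_port("9000"))        # prints 9000 too, but it's a str
```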


I agree it's a bit of a pipe dream. I'm more thinking of performance here, e.g. web services using Django. You could start off in dynamic/interpreted land and have a seamless transition to performant compiled land. Also lets you avoid premature optimisation since you can only optimise the hot paths.

Also, types are self-documenting to an extent. Could be helpful for a shared codebase. Again, Python is only now getting round to adding type definitions.

At the end of the day, good tooling/ecosystem and sheer developer hours are more important than what I'm suggesting, but it would be nice anyway. I dream about cool programming languages but I stick to boring ones for work.


py2many does python-esque to both Go and Rust.

The larger problem is building an ecosystem and a stdlib that's written in python, not C. Use ffi or similar instead of C-API.


For a second I thought we'd seen a lot of the universe, the HDF being 1/5th of the radius away, and on googling, Earendel is 2/3rds of the radius... of the known universe.

"According to the theory of cosmic inflation initially introduced by Alan Guth and D. Kazanas, if it is assumed that inflation began about 10^-37 seconds after the Big Bang and that the pre-inflation size of the universe was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 1.5×10^34 light-years—at least 3×10^23 times the radius of the observable universe."

So, if true, all those metrics of atoms, stars, planets in the known universe are multiplied by 10^23.

Even if intelligent life were rare enough to only appear once per knowable universe, there could be 10^23 different intelligent species - single planet to galaxy spanning empires - that would probably never meet another intelligent species (except those with the same ancestors).
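The quoted 3×10^23 is just the ratio of the two radii. Assuming the commonly quoted ~46.5 billion light-year comoving radius for the observable universe:

```python
# Radius from the quoted inflation estimate, in light-years.
inflation_radius_ly = 1.5e34
# Commonly quoted comoving radius of the observable universe.
observable_radius_ly = 4.65e10

ratio = inflation_radius_ly / observable_radius_ly
print(f"{ratio:.1e}")  # ~3.2e+23, matching the quoted "at least 3x10^23"
```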


> if true, all those metrics ... are multiplied by 10^23

Considering the relation of radius to volume, shouldn't we add a meager 3 to make the exponent a total of 26? (Assuming of course that the universe is just a three dimensional volume. :D)


Wow totally forgot that… but wouldn’t it be (10^23)^3 = 10^69!?


You're right, I forgot my exponentiation rules: https://mathinsight.org/exponentiation_basic_rules#power_pow...
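A quick check of the power rule with exact integer arithmetic:

```python
# Power rule: (a^m)^n = a^(m*n), so a 10^23 ratio in radius
# becomes (10^23)^3 = 10^69 in volume, not 10^26.
radius_ratio = 10 ** 23
volume_ratio = radius_ratio ** 3
print(volume_ratio == 10 ** 69)  # True
```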


nice...


The problem is in the words "at present". There's no global timeline on which to say "present". The time in far-away places just hasn't happened yet.

UPDATE: minor consideration -- time can flow at different speeds (e.g. in gravitational wells). That probably doesn't matter to our discussion, but it's just another argument against "at present".


Just because you don't see it, doesn't mean it didn't happen. The light the sun emits will only be seen by us ~8 min later, but it's still being emitted right now.
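The ~8 minute figure is just distance over speed:

```python
AU_METERS = 1.496e11   # mean Earth-Sun distance, in metres
C_M_PER_S = 2.998e8    # speed of light, in metres per second

minutes = AU_METERS / C_M_PER_S / 60
print(f"{minutes:.1f} min")  # ~8.3 min
```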


Sort of.

The concept of simultaneity is mind-bending when you really dig into it [0]. The upshot is that the hard problem of synchronizing distributed systems is a problem of fundamental physics, rather than simply a limitation of any given developer. It's always nice to know that the reason you haven't met some specification given to you by a non-technical user representative is because meeting that specification violates the known laws of physics.

0. https://en.wikipedia.org/wiki/Relativity_of_simultaneity


No, there's a mistake in the statement.

Counter-example -- what's happening _right now_ at a distance of 20 billion light-years from us?

Or another question -- what happened 1 hour _before_ the Big Bang?

Both questions are already invalid by themselves.

UPDATE: I should've tried to answer my questions to show what I mean:

1. At 20B ly from us there's neither space nor time to talk about. Physicists talk in formulas, and I suspect that if I knew how, I just wouldn't be able to come up with a formula to formulate my question.

2. "before" the Big Bang there was no time itself to say "before".

In other words, we can only reason about, or imagine, things within our light cones. Outside the light cone, questions become invalid to ask.


Can't we imagine a bag of clocks at the Big Bang origin that were synchronized and allowed to travel in all directions along with various sections of the ejecta, including one on Earth? One could imagine events that happen at the same clock reading as ours in all the different parts of the visible and non-visible universe.


The effects of relativity cause that thought experiment to fall apart quickly. I build two clocks, and send one to alpha centauri and back in a spaceship. When the travelling clock gets back to Earth it will be showing a different time (because of time dilation during acceleration). What does "the same clock reading" mean then?
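The size of the clock discrepancy can be estimated with the special-relativistic Lorentz factor. This sketch assumes a constant cruise velocity and ignores the acceleration phases, so it understates the full general-relativistic treatment:

```python
import math

def lorentz_gamma(v_fraction_of_c: float) -> float:
    """Time dilation factor for a clock moving at constant velocity v."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# Round trip to Alpha Centauri (~4.37 ly each way) at half the speed of light.
distance_ly, v = 4.37, 0.5
earth_years = 2 * distance_ly / v             # elapsed on the Earth clock
ship_years = earth_years / lorentz_gamma(v)   # shown on the travelling clock
print(f"Earth clock: {earth_years:.1f} y, ship clock: {ship_years:.1f} y")
```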


Yes of course they won't show the same time if you bring them back. But the point is to argue that it is possible for there to be a "right now," as defined by what the traveling clock shows, outside of our light cone.


It's totally possible that such clocks could then touch each other in one place, while both showing widely different readings. Which one is right then?

PS: by the way, Poincare proved that there's no way to synchronize clocks perfectly, only up to some margin.


> the pre-inflation size of the universe was approximately equal to the speed of light times its age

What is the basis of this assumption? Why should the universe be (initially) expanding at the speed of light?


> Even if intelligent life were rare enough to only appear once per knowable universe, there could be 10^23 different intelligent species

The search space of complex organic molecules grows exponentially with size. All that difference creates is some marginal space between molecules with hundreds of monomers and molecules with another 4 or 8 monomers on top, where life could fit.

Your revision of 10^70 makes it a little bit more believable. But I wouldn't at all expect that to happen.


Interesting, I thought it was only Guth who introduced inflation.


Ironic that this has been downvoted a bunch


haha it is what it is, I'll figure out how to better explain what I mean next time.

Putting this one down to "words are lossy compression for thoughts"


Huge potential. It's annoying in a way that gets me to keep playing, but maybe slightly too annoying. I have a couple of suggestions that would change the gameplay a lot, so feel free to ignore them:

Attacks that lose when they are exactly 1 letter longer: if winning by 1 was too powerful, these could draw/stalemate instead, letting you fortify your position and potentially attack a turn later (giving your opponent a chance to fortify their position). It feels a bit cheap to not be able to spell out a word because the opponent blocks with a shorter word.

I also find myself trying to plan words but never getting the right letters - maybe you could add a view of the next 3-5 incoming letters to reduce the rng-ness and allow more planning.

Again, great job, I look forward to seeing how it progresses!


Interesting suggestions, and thanks for the kind words! Definitely things to ponder :)


Yeah I'd agree, we learn addition/multiplication/etc. as processes of smaller problems. If you gave an LLM a prompting framework to do addition, I'm sure the results would be better (add the units, add the tens, and so on).

Food for thought: would a savant use the same process? Or have they somehow recruited more of their brain, to where they can memorize much larger problems?
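The units-then-tens procedure described above is ordinary column addition; a minimal sketch of what such a prompting framework would walk an LLM through:

```python
def column_add(a: str, b: str) -> str:
    """Add two non-negative decimal numbers digit by digit,
    units column first, carrying into the next column."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # units, then tens, ...
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("478", "956"))  # 1434
```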


So first of all, prompting and re-prompting an LLM is basically forcing it to deduce rather than induce; using millions of gates to get from 1+1 to 1+2. That's what our brain does, too (uses millions of gates for dumb stuff), but we designed computers to do that using 4 bits, so it's ironic that we're now trying to write scripts to force something with 60 billion parameters to do the same thing.

I think savants usually solve problems in the most deductive way, using reasoning that leads to other reasoning. I went to an elementary school in the 80s where more than half the kids would now be labeled autistic... some got into math programs at colleges by the age of 12. I believe it's all pure reasoning, not like some magical calculator that spat out answers they didn't understand the reasons for.

[edit] If you meant: Do savants solve problems by recursively breaking problems into smaller and smaller problems, then yes. But the breaking-apart-of-problems is actually the hard problem, not the solving.


GPT?


I'd assume it's more about redundancy - 1 of 18 motors failing vs 1 of 4 would be much safer to land with. Also means the replacement motors would be cheaper (individually). Like the Starship booster.


And having more redundancy presumably means you can build each one cheaper and don't have to inspect them as much.

Smallish helicopters aren't that expensive. Maintenance and fuel are though. Going electric already helps with the fuel, going for many motors helps with the maintenance.
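A quick binomial sketch makes the redundancy argument concrete; the per-motor failure probability here is a made-up illustrative number, and the motors are assumed to fail independently:

```python
from math import comb, ceil

def p_thrust_loss(n: int, p: float, frac: float) -> float:
    """P(at least frac of n independent motors fail), each with prob p."""
    k_min = ceil(frac * n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

p = 0.01  # assumed per-flight failure probability of one motor
print(f"4 motors,  >=25% thrust loss: {p_thrust_loss(4, p, 0.25):.1e}")
print(f"18 motors, >=25% thrust loss: {p_thrust_loss(18, p, 0.25):.1e}")
# With 18 motors, losing a quarter of your thrust requires 5 simultaneous
# failures, so the probability drops by several orders of magnitude.
```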


Helicopters are lucky to get 10 mpg.

The range of an electric helicopter is going to make it impractical, and I doubt that's going to change any time soon.

Gas stores energy in 100x less weight than lithium ion. Electric motors are 2x more efficient.

You're still going to need space and weight for ~50x more battery, which is hard to come by...
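Taking the comment's own figures at face value, the ~50x follows directly:

```python
# Figures from the comment above, not precise measurements:
energy_density_ratio = 100   # gasoline vs lithium-ion, energy per kg
motor_efficiency_gain = 2    # electric motor vs combustion engine

# Battery weight needed for the same range, relative to fuel weight.
battery_weight_multiplier = energy_density_ratio / motor_efficiency_gain
print(battery_weight_multiplier)  # 50.0
```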

