Hacker News | new | past | comments | ask | show | jobs | submit | innocentoldguy's comments

I feel sorry for this woman. Meta did this to me because they're discriminatory dicks, so I know how she felt. Fortunately, I have a tremendous amount of family support.


I have the same question. I've been in the software industry since the early 90s and I've seen the "static types are the best thing since sex" fad fade in and out repeatedly during that time.

Having used plenty of statically typed and dynamically typed languages, I really can't say static typing has had any effect on me whatsoever. I honestly couldn't care less about it. I also can't remember ever having a type-related bug in my code. Perhaps I have an easier time remembering what my types are than others do. Who knows?


> In Japanese, an E column kana followed by I sometimes makes a long E, like in 先生 (sen + sei -> sensē).

While it is sometimes difficult to discern the combined E and I sound, especially for non-native speakers, the word 先生 (sensei) is technically pronounced "sensei" and should be spelled that way to distinguish it from words with long E sounds, such as ええ (ee) and お姉さん (oneesan). Similarly, the OU in 東京 (toukyou) and the OO in 大きな (ookina) are different and should be spelled differently. I hope this helps.

EDIT: Added a comma.


Sure, and in a Japanese song, "sensei" can yield four beats or notes SE/N/SE/I.

But spelling out and singing aren't normal speech. Spelling/singing can break apart diphthongs, like NAI becomes NA-I.

先生 is not written with い because its /e:/ sounds different from the one in おねえさん; it doesn't sound different (when you aren't spelling). It is written the way it is for ancient historical reasons.

> Similarly, the OU in 東京 (toukyou) and the OO in 大きな (ookina) are different

No, they aren't.

> I hope this helps.

こう言うバカな戯言は少しも誰にも役に立つはずないんだぜ。("This kind of stupid nonsense isn't going to be of any use to anyone.")


We are talking about writing/spelling, aren't we?

Why would you want to confuse the hell out of those learning Japanese by spelling せんせい (sensei) using an E with a macron, a la "sensē," when that is not at all how you spell it or type it phonetically in an IME? Having a one-to-one romanization for each Hiragana phonetic is far more logical for learners, who are essentially the target of romanized Japanese, than creating a Hooked on Phonics version that is completely disconnected from writing reality.

I also think your comment, written in Japanese, saying, "This stupid nonsense isn't going to be of any use to anyone," is both ignorant and uncalled for.


In plain-text romanization, the standard and expected spelling is “sensei.” That’s the formal, conventional representation, especially for typing and learning.

Phonetically, in natural speech, the vowel often compresses toward a long /e/ sound, so you may hear something closer to "sensē" or "sensee" depending on context and speaker.

In stylistic writing (e.g. light novels or dialogue), you might occasionally see phonetic renderings to reflect speech, but in formal or instructional contexts, “sensei” remains the correct and expected form.

In short:

• Orthography: sensei

• Phonetics: can vary in actual speech

• Stylistic writing: sometimes bends toward pronunciation

Different layers, different purposes.

I think this may mostly be a case of people talking past each other.

One side is focusing on orthographic convention (how it’s written and typed), the other on phonetic realization (how it’s actually pronounced in speech).

Those aren’t contradictory claims — they’re just different layers of the same thing.


Hi, ursAxZA. Yes, you're describing an "elision," which is where speakers drop or blur sounds together to make speech more fluid, like the way some people say, "Sup?" when they mean, "What's up?" or replace the T with a glottal stop in the word "mountain," as they do in Utah.

I wholeheartedly agree that it is fine to write things like "Sup?" when appropriate, such as dialogue in a novel. You see this all the time in Japanese TV, books, magazines, manga, etc. However, I disagree that elisions should dictate how we spell words in regular written communication, especially when discussing a tool meant to help non-native Japanese speakers learn the language. And as the parent poster pointed out, when singing, you would sing "se n se i" rather than "se n se e." The same is true of haiku and other instances where the morae (linguistic beats similar to syllables in English) are clearly enunciated.

As I said, sensei is technically four morae and different from "sensē," and, in my opinion, should remain that way in Romaji, it being a writing system and one method for inputting Japanese text.

Thanks for the respectful conversation. I appreciate the points you brought up.


Thanks — and yes, I think we’re essentially aligned now.

Once we separate the layers — orthography, pronunciation, and stylistic rendering — the friction mostly disappears.

Romanization is a writing system with its own conventions; speech naturally undergoes reductions and elisions; and creative writing sometimes pulls closer to the spoken register.

Different layers, different functions — and the confusion only arises when they’re collapsed into one.

Appreciate the thoughtful discussion.


That's right. That ē thing was a pretty stupid gaffe I made.


Calling out your own mistake takes toughness.

You owned it — that matters.


No worries, and I forgive you for the sardonic Japanese. I wish you the best.


> E with a macron, a la "sensē,"

Sorry, yes. That is my mistake. Hepburn doesn't use any such ē notation. Hepburn preserves えい and ええ as "ei" and "ee", conflating only "ou" and "oo" into ō (when they appear in a combination that denotes the long o:).


Some modern adaptations of his transcription do, however. E.g. Modern Japanese Grammar: A Practical Guide uses the transcription “sensee” (they consistently don’t use macrons in this book: e.g. they use oo for ō, etc.).

Hepburn didn’t write “sensē” himself because in the 1880s it was still pronounced “ei”, not “ē”. If it were pronounced like it’s pronounced nowadays, you can bet he’d spell it with ē.


sugē


> Having a one-to-one romanization for each Hiragana phonetic is far more logical for learners

It depends on the learner’s (and textbook author’s) goals. Sometimes, having a phonetic transcription of the more common pronunciation is a more important consideration.

Historically, Hepburn’s transcription pre-dates Japanese orthographic reform. He was writing “kyō” back when it was spelled けふ. Having one-to-one correspondence to kana was not a goal.

So writing sensē is kinda on-brand (even if Hepburn didn’t write like this, because in his times it still wasn’t pronounced with long e).


I think most learners probably only pick up maybe 50 words before switching from romaji to kana anyway, so in the grand scheme of things the romanization's correspondence to the kana orthography isn't that important.


I don’t mean to minimize the huge effort by the Gleam team; however, Elixir cannot become Gleam without breaking OTP/BEAM in the same ways Gleam does. As it stands now, Elixir is the superior language between the two, if using the full Erlang VM is your goal.


I use many of the OTP functions in Gleam on the regular; what functionality can't I call?

Gleam can call any Erlang function, and can somewhat handle the idc types [I'm sure it has another name].

Did I miss something that Gleam fails on? Because this is one of my concerns.


- No state machine behaviours. Gleam cannot do gen_statem.

- Limited OTP system messages. Gleam doesn't yet support all OTP system messages, so some OTP debugging messages are discarded by Gleam.

- Gleam doesn't have an equivalent of gen_event to handle event handlers.

- Gleam doesn't support DynamicSupervisor or the :simple_one_for_one strategy for dynamically starting children at runtime.


I didn't know about the statem limitation; I have, however, worked around it with a gen_server-like wrapper, so that all state transitions were handled by Gleam's type system.

I have been meaning to ask about that on the Discord, but it's one of the ten thousand things on my backlog.

Maybe I could write a gen_event equivalent... I have some code which does very similar things.

Thank you for taking the time to respond.


You're welcome.

I'm sure at some point, Gleam will figure it all out.


And he delivers the line with such perfection.


I have yet to see Michael Caine fail to deliver his lines perfectly.


I find Elixir and Erlang easier, but I'm still a neophyte with Rust, so I may feel differently in a year.


I worked for a company writing Elixir code several years ago. Prior to my arrival, the ignorant architect had deployed Elixir in a way that broke the BEAM (which he viewed as "old and deprecated"). Furthermore, one of the "staff" engineers—instead of using private functions as they're intended—created a pattern of SomePublicModule and SomePublicModule.Private, where he placed all the "private" functions in the SomePublicModule.Private module as public functions so that he could "test them."
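A hypothetical reconstruction of that pattern, with made-up module names, looks something like this:

```elixir
# Hypothetical reconstruction of the antipattern: helpers that should be
# defp live in a parallel .Private module as public functions, purely so
# tests can call them directly.
defmodule Billing do
  def charge(invoice), do: Billing.Private.apply_discounts(invoice)
end

defmodule Billing.Private do
  # Public only for the tests' sake; nothing stops other modules from
  # depending on it directly, which is exactly what ends up happening.
  def apply_discounts(invoice), do: invoice
end
```

The idiomatic version is a `defp apply_discounts/1` inside `Billing`, exercised through `Billing.charge/1` in tests.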

I tried almost in vain to fix these two ridiculous decisions, but the company refused to let code fixes through the review process if they touched "well-established, stable code that has been thoroughly tested." After being there for a couple of years, the only thing I was able to fight through and fix was the BEAM issue, which ultimately cost me my job.

My point in all this is that, at least sometimes, it isn't good engineers writing silly code, but rather a combination of incompetent/ignorant engineers making stupid decisions, and company policies that prevent these terrible decisions from ever being fixed, so good engineers have no choice but to write bad code to compensate for the other bad code that was already cemented in place.


> had deployed Elixir in a way that broke the BEAM (which he viewed as "old and deprecated")

I'd love to hear more about this!

> instead of using private functions as they're intended—created a pattern of SomePublicModule and SomePublicModule.Private, where he placed all the "private" functions in the SomePublicModule.Private module as public functions so that he could "test them."

Yeah, this is weird; you can just put your tests in the PublicModule. Or you can just solve this by not testing your private code ;)


> I'd love to hear more about this!

He deployed our applications using Kubernetes and refused to implement libcluster. There was something else, too, but I can't recall what it was. It was seven years ago.
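For anyone unfamiliar, clustering BEAM nodes under Kubernetes with libcluster is a small amount of configuration; a sketch along these lines, where the selector and basename are placeholders rather than anything from that codebase:

```elixir
# config/runtime.exs (placeholder app names). The Kubernetes strategy
# discovers peer pods via the label selector and connects the nodes.
config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "app=myapp",
        kubernetes_node_basename: "myapp"
      ]
    ]
  ]
```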

> Yeah, this is weird...

I kept telling this developer that you're supposed to test your private functions through your public interfaces, not expose your private functions and hope nobody uses them (which they did), but that fell on deaf ears. He was also a fan of defdelegate and used it EVERYWHERE. Working with that codebase was so annoying.
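For context, `defdelegate` just forwards a call to another module; used everywhere, every module becomes a pass-through and the call site stops telling you where the logic actually lives. A toy illustration with hypothetical module names:

```elixir
defmodule MyApp.Accounts do
  # The real implementation lives here.
  def get_user(id), do: {:ok, id}
end

defmodule MyApp.API do
  # Every function is just a forward; reading MyApp.API tells you
  # nothing about behavior, only about routing.
  defdelegate get_user(id), to: MyApp.Accounts
end
```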


Wouldn't you just `cast` instead of `call` if you thought this was going to be an issue?


The issue isn't that OTP isn't a priority for Gleam, but rather that it doesn't work with the static typing Gleam is implementing. This is why they've had to reimplement their own OTP functionality in gleam_otp. Even then, gleam_otp has some limitations, like being unable to support all of OTP's messages, named processes, etc. gleam_otp is also considered experimental at this point.


Having Erlang-style OTP support (for the most part) is very doable; I've written my own OTP layer instead of the pretty shoddy stuff Gleam ships with. It's not really that challenging of a problem and you can get stuff like typed processes (`Pid(message_type)`, i.e. we can only send `message_type` messages to this process), etc. out of it very easily.

This idea that static typing is such a massive issue for OTP style servers and messaging is a very persistent myth, to be honest; I've created thin layers on top of OTP for both `purerl` (PureScript compiled to Erlang) and Gleam that end up with both type-safe interfaces (we can only send the right messages to the processes) and are type-safe internally (we can only write the process in a type-safe way based on its state and message types).


I wholeheartedly agree with you that gleam_otp is janky. Still, actor message passing is only part of the picture. Here are some issues that make static typing difficult in OTP:

• OTP processes communicate via the actor model by sending messages of any type. Each actor is responsible for pattern-matching the incoming message and handling it (or not) based on its type. To implement static typing, you need to know at compile time what type of message an actor can receive, what type it will send back, and how to verify this at compile time.

• OTP's GenServer behaviour uses callbacks that can return various types, depending on runtime conditions. Static typing would require that you predefine all return types for all callbacks, handle type-safe state management, and provide compile-time guarantees when handling these myriad types.

• OTP supervisors manage child processes dynamically, which could be of any type. To implement static typing, you would need to know and define the types of all supervised processes, know how they are going to interact with each other, and implement type-safe restart strategies for each type.

These and other design roadblocks may be why Gleam chose to implement primitives, like statically typed actors, instead of GenServer, GenStage, GenEvent, and other specialized OTP behaviours, full supervisor functionality, DynamicSupervisor, and OTP's Registry, Agent, Task, etc.
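To make the GenServer point concrete: in Elixir, the shape a callback returns is chosen at runtime, which is exactly what a static type system has to pin down up front. A minimal sketch (the module and the limit are made up):

```elixir
# One callback, three different return shapes, selected only at runtime.
defmodule Counter do
  use GenServer

  def init(n), do: {:ok, n}

  # {:reply, reply, new_state}
  def handle_call(:get, _from, n), do: {:reply, n, n}
  def handle_call(:bump, _from, n) when n < 10, do: {:reply, :ok, n + 1}
  # {:stop, reason, reply, new_state} once the arbitrary limit is hit
  def handle_call(:bump, _from, n), do: {:stop, :limit_reached, :error, n}
end
```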

OTP and BEAM are Erlang and Elixir's killer features, and have been battle-tested in some of the most demanding environments for decades. I can't see the logic in ditching them or cobbling together a lesser, unproven version of them to gain something as mundane as static typing.

EDIT: I completely missed the word "actor" as the second word in my second sentence, so I added it.


I suppose I was unclear. It is OTP-style `gen_server` processes that I'm talking about.

> OTP processes communicate via the actor model by sending messages of any type. Each actor is responsible for pattern-matching the incoming message and handling it (or not) based on its type. To implement static typing, you need to know at compile time what type of message an actor can receive, what type it will send back, and how to verify this at compile time.

This is trivial, your `start` function can simply take a function that says which type of message you can receive. Better yet, you split it up in `handle_cast` (which has a well known set of valid return values, you type that as `incomingCastType -> gen_server.CastReturn`) and deal with the rest with interface functions just as you would in normal Erlang usage (i.e. `get_user_preferences(user_preference_process_pid) -> UserPreferences` at the top level of the server).

Here is an example of a process I threw together, having never used Gleam before. The underlying `gen_server` library is my own as well, as is the FFI code (Erlang code) that backs it. My point in posting this is mostly that all of the parts of the server, i.e. what you define to define a server, are type safe in exactly the way that people claim is somehow hard:

    import gleam/option
    import otp/gen_server
    import otp/types.{type Pid}
    
    pub type State {
      State(count: Int, initial_count: Int)
    }
    
    pub type Message {
      Increment(Int)
      Decrement(Int)
      Reset
    }
    
    pub fn start(initial_count: Int) -> Result(Pid(Message), Nil) {
      let spec =
        gen_server.GenServerStartSpec(
          handle_cast:,
          init:,
          name: option.Some(gen_server.GlobalName("counter")),
        )
      gen_server.start(spec, initial_count)
    }
    
    fn name() -> gen_server.ProcessReference(Message, String) {
      gen_server.ByGlobal("counter")
    }
    
    pub fn count() -> Result(Int, Nil) {
      gen_server.call(name(), fn(state: State) -> gen_server.CallResult(State, Int) {
        gen_server.CallOk(new_state: state, reply: state.count)
      })
    }
    
    pub fn increment(count: Int) -> Nil {
      gen_server.cast(name(), Increment(count))
    }
    
    pub fn decrement(count: Int) -> Nil {
      gen_server.cast(name(), Decrement(count))
    }
    
    pub fn reset() -> Nil {
      gen_server.cast(name(), Reset)
    }
    
    pub fn init(initial_count: Int) -> State {
      State(count: initial_count, initial_count:)
    }
    
    pub fn handle_cast(
      message: Message,
      state: State,
    ) -> gen_server.CastResult(State) {
      case message {
        Increment(count) ->
          gen_server.CastOk(State(..state, count: state.count + count))
    
        Decrement(count) ->
          gen_server.CastOk(State(..state, count: state.count - count))
    
        Reset -> gen_server.CastOk(State(..state, count: state.initial_count))
      }
    }

It's not nearly as big of an issue as people make it out to be; most of the expected behaviors are exactly that: `behaviour`s, and they're not nearly as dynamic as people make them seem. Gleam itself maps custom types very cleanly to tagged tuples (`ThingHere("hello")` maps to `{thing_here, <<"hello">>}`, and so on) so there is no real big issue with mapping a lot of the known and useful return types and so on.


I read the code but I'm not sure I understood all of it (I'm familiar with Elixir, not with Gleam).

For normal matters I do believe that your approach works (start returns the pid of the server, right?), but what is going to happen if something, probably a module written in Elixir or Erlang that wants to prove a point, sends a message of an unsupported type to that pid? I don't think the compiler can prevent that. It's going to crash at runtime, or it has to handle the unmatched type and return a not-implemented sort of error.

It's similar to static typing a JSON API, then receiving an odd message from the server or from the client, because the remote party cannot be controlled.


> [...] start returns the pid of the server, right?

Yes, `start` is the part you would stick in a supervision tree, essentially. We start the server so that it can be reached later with the interface functions.

> [...] probably a module written in Elixir or Erlang that wants to prove a point, sends a message of an unsupported type to that pid? I don't think the compiler can prevent that. It's going to crash at runtime or have to handle the unmatched type and return a not implemented sort of error.

Yes, this is already the default behavior of a `gen_server` and is fine, IMO. As a general guideline I would advise against trying to fix errors caused by type-unsafe languages; there is no productive (i.e. long-term fruitful) way to fix a fundamentally unsafe interface (Erlang/Elixir code), the best recourse you have is to write as much code you can in the safe one instead.

Erlang, in Gleam code, is essentially a layer where you put the code that does the fundamentals and then you use the foreign function interface (FFI) to tell Gleam that those functions can be called with so and so types, and it does the type checking. This means that once you travel into Erlang code all bets are off. It's really no different to saying that a certain C function can call assembly code.

    pub type ProcessReference(message, term) {
      ByPid(Pid(message))
      ByGlobal(term)
      ByLocal(Atom)
    }
    
    @external(erlang, "otp_server", "call")
    fn gen_server_call(
      pid: ProcessReference(message, term),
      call_func: CallFunc(state, reply),
    ) -> Result(reply, Nil)
And the corresponding Erlang code:

    call({by_pid, Pid}, CallFunc) ->
        {ok, gen_server:call(Pid, {call_by_func, CallFunc})};
    call({by_global, Name}, CallFunc) ->
        {ok, gen_server:call({global, Name}, {call_by_func, CallFunc})};
    call({by_local, Name}, CallFunc) ->
        {ok, gen_server:call(Name, {call_by_func, CallFunc})}.


In my opinion, Elixir and Phoenix will give you a better experience with BEAM and OTP, excellent tooling, a more mature ecosystem, and one of the best web frameworks ever to exist. I think Gleam is cool, but I can't see trading these benefits in for static typing.

To be fair, I can't think of anything I care less about than static typing, so please keep that in mind when entertaining my opinion.


I also preferred dynamic typing, until my complex Rails app grew to the point where I didn't dare to do any refactoring. But I didn't switch opinions until I discovered ML type systems, which really allow for fearless refactoring. On occasion there's some battling to satisfy the type system, but even with that I'm more productive once the app grows in complexity.

I thought I'd share my experience, not trying to convince anyone ; - )

