Go 2, here we come (golang.org)
816 points by mdelias 52 days ago | 516 comments



I’m hoping that https://github.com/golang/go/issues/19623 will come through, and we’ll get a native “true integer” type (and hopefully a rational one as well, though maybe this is pushing it a bit). This is really something that should be implemented at the language level, so that “int” can become a true integer, yet still remain efficient in many cases.

It is bizarre to me that languages boasting built-in, language-level data structures like lists, hashtables, etc. are content to leave us with the bare minimum of support for numbers: basically whatever the hardware thinks a number is. The semantics of integers and fractions are perfect, and everybody already knows them. On the other hand, overflows in int32s are weird, and if your idea of a fraction is a floating-point number, then you can never have something like (5/3)*6 evaluate to 10 exactly.

To be clear, I think fixed-width integers and floating-point numbers have their place, I just see no reason why they should be the default.
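
For what it's worth, Go already ships arbitrary-precision integers and rationals in the standard library's math/big package; the proposal is essentially about promoting that behavior into the built-in types and literals. A minimal sketch of the (5/3)*6 example using what exists today:

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        // 5/3 held as an exact fraction, not a rounded float
        x := new(big.Rat).SetFrac64(5, 3)

        // (5/3) * 6 -- the 3 in the denominator cancels exactly
        x.Mul(x, new(big.Rat).SetInt64(6))

        fmt.Println(x.RatString())                         // 10
        fmt.Println(x.Cmp(new(big.Rat).SetInt64(10)) == 0) // true
    }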


I like how Haskell does it. When you simply write a number like "3", the compiler infers the type "Num a => a", which means it could be any type you have loaded that defines the functions in the typeclass Num:

    class Num a where
      (+) :: a -> a -> a
      (-) :: a -> a -> a
      (*) :: a -> a -> a
      negate :: a -> a
      abs :: a -> a
      signum :: a -> a
      fromInteger :: Integer -> a
"3.0" would be "Fractional a => a" which means it defines the functions in the typeclass Fractional (in addition to being a Num):

    class Num a => Fractional a where
      (/) :: a -> a -> a
      recip :: a -> a
      fromRational :: Rational -> a
Depending on how you use the value, the type is further refined at compile time. For example, if you did `recip 3`, 3 couldn't be just any Num anymore; it would have to be some Fractional.

Regarding (5/3)*6, it does equal 10 using floating point. A better example would be how 0.1 + 0.2 is not equal to 0.3. We can see this:

    ghci> 0.1 + 0.2 == (0.3 :: Double)
    False
If we specified that we're working with Rational values, which are also Fractional a => a, and which are defined as a pair of integers, one representing a numerator and the other a denominator, roughly like so:

    type Rational = Ratio Integer
    data Ratio a = a :% a
then we can see that 0.1 + 0.2 does equal 0.3:

    ghci> 0.1 + 0.2 == (0.3 :: Rational)
    True
It's pretty cool that Haskell lets you define new types of numbers and use them like any other. While you have transparent support for hardware number types like Int, Float, Double, Word, Word8, Word16, Word32, and Word64, you also have transparent support for the arbitrary-precision Integer and Rational. Writing your functions against typeclasses like Num and Fractional lets them work with any of these types, as well as any types defined in the future.


This is a great system for tying in new number types with existing ones, but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me. Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.

I love all of the automatic casting between exact types, but happily implicitly casting to inexact types is (in my opinion) a big mistake.


> [...] but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me.

What do you mean? As far as I'm aware, you always have to explicitly convert exact types to inexact types in Haskell (using "fromIntegral" and "realToFrac").

> Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.

Indeed this is rather annoying. It's this way because of legacy reasons (see https://www.reddit.com/r/haskell/comments/60u10b/do_function...).


If you don't like the automatic cast, you can always manually cast inline by doing (3 :: Int). I've used this a number of times when doing numeric stuff in Haskell.


I do know how to manually cast, although in my case I'm usually casting to Integer. I was arguing that a cast from Num to Float, Double, Int etc is a lossy operation, and breaks the semantics of arithmetic, and so should not be done implicitly.


> but the lack of explicitness about casting exact types (Integer, Rational) to inexact types (Int, Float) has led to many bugs and confusions for me.

> I love all of the automatic casting between exact types, but happily implicitly casting to inexact types is (in my opinion) a big mistake.

> I was arguing that a cast from Num to Float, Double, Int etc is a lossy operation, and breaks the semantics of arithmetic, and so should not be done implicitly

It's not really casting; it's just type inference. You can't have a Num and use it as a Rational in one place and a Float in another (unless you lift the monomorphism restriction), and you can't add a Rational and a Float without explicitly converting one to the other with a function like fromRational. So real casting is very much explicit. I feel like you're technically arguing against representing Floats with decimal numbers in code, but I don't think you really mean that.

EDIT: At the risk of stating something that you might already know, (x :: Int) in Haskell is not a cast like (int)x in C. (x :: Int) simply further restricts the list of possible concrete types that x could be. If the code context implies that x is a Num a => a, that means it could be one of an Int, a Float, etc. Doing (x :: Int) simply says to restrict those possibilities to only Int. If x were a Float and you did x :: Int, that'd be a type error, because it was never possible for it to be an Int, only a Float. You'd need to convert by using a function like floor, ceiling, or round.

EDIT 2: To further explain this, C's (int)x is a run-time operation that converts whatever x is to an int, while Haskell's x :: Int is a compile-time operation that states that x can only ever be an Int and never something else. At the end of compiling, Haskell needs everything to be concrete types. Num a => a is not concrete because it can be many types and if Haskell can't resolve it to a single concrete type implicitly by context or explicitly by signatures like x :: Int, then it has to resort to defaulting rules or raise a compile-time error informing the programmer of the type ambiguity.

EDIT 3: On:

> Coupled with the fact that so many standard functions (like length, for example) want to return an Int rather than an Integer, it just leads to me spewing ((fromIntegral ___)::Integer) all over my code.

I agree it would be nice for length to return a Num a => a instead of an Int, but the reason it doesn't is to maintain stability and limit code breakage from the times before Num existed, and the Haskell community seems to really care about that. They did add genericLength, which does return a Num a => a, though. There are other generic functions in Data.List as well.


Makes no sense to me to redefine existing integer types. Why not introduce a primitive type "num" for arbitrary precision rational numbers instead?

As a long-time Racket and Scheme user, I'd say that arbitrary-precision numbers as the default bring more disadvantages than advantages. As an option with syntax support, yes, but not as a default. It just makes it harder to port all kinds of code that relies on modulo arithmetic.


> It just makes it harder to port all kinds of code that relies on modulo arithmetic.

If you're relying on modulo arithmetic, you should be using a fixed size integer anyways, not one that varies based on the architecture. Go has fixed size integer types available, and modulo arithmetic should be using those, not "int".


I'm not sure how these two are related. Why would you mix modulo operations and fixed size? Modulo on an arbitrary bigint is perfectly valid and useful.


Modulo arithmetic refers to the property of fixed-size integers where they predictably wrap around in expected ways when you overflow or underflow them. This wraparound is "free", so it has commonly been used in cyclic high-performance code. An arbitrary bigint is computationally more expensive, and it will never experience this effect, so it's a double whammy of irrelevance for bigints. You can manually do a modulo operation on a bigint, but that just makes it even more computationally expensive than a fixed-size int, requires more code, and would be considered an unnecessary inconvenience by most people who care about modulo arithmetic.
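
To make the cost difference concrete, here's a small Go sketch: the fixed-size version wraps for free, while the bigint version needs an explicit (and slower) reduction step:

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        // Fixed-size: the wraparound is implicit and essentially free.
        var h uint8 = 250
        h += 10
        fmt.Println(h) // 4, i.e. (250 + 10) mod 256

        // Bigint: never wraps, so the reduction is manual and costs
        // extra operations (plus allocations) every time.
        b := big.NewInt(250)
        b.Add(b, big.NewInt(10))
        b.Mod(b, big.NewInt(256))
        fmt.Println(b) // 4
    }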


Except cryptographers. They like modulo arithmetic, and 64 bits really doesn't cut it.


> Why not introduce a primitive type "num" for arbitrary precision rational numbers instead?

Rationals aren't supported natively by processors, so there's no real need to handle this as a primitive instead of letting people use a library for it. Adding them to the base language just because some people would find it convenient would clash with Go's explicitly minimalist philosophy.


This argument falls flat for me. Classes aren't supported natively by processors either, yet any number of OOP languages use them.

It's generally nice when you can do simple things with the language's built-in standard library. I tend to prefer languages with more powerful standard libraries because it means that you can more easily move across codebases since they'll all be the same. If commonly used data types like rationals, hashes, maps, strings, etc., aren't defined in the stdlib, then there'll be any number of different libraries to use them, and you'll have to potentially re-learn a lot of different unnecessary things when working on different codebases.


Yes, a standard library. I understood the poster above you to suggest they shouldn't be part of the language that's used by default; for sure, though, it would be convenient to have a centralized implementation.


Exactly. Having a rational datatype as easily usable as float would make it easier to use as a default when you don't really need floats, just non-integers. Which is often the case, if you think about it. Automatic simplification, literals (12.345122 is a rational, as is 22/7).


It's like building in special handling for a string type, exactly one instead of a catalog of library options.

I'd say that would fit in quite well with the brand of minimalism Go represents: keep the language simple and provide builtins where more convenience is needed. The opposite kind of "minimalism" would be providing tools that allow libraries to define what other languages can only support as special builtins. C++ and Scala went down that road; they minimize builtins by empowering libraries to replace them. That's also a form of minimalism, but not the one chosen by Go.

Tangent: now I wonder whether Scala inherited the special handling for the (not special, but specially handled) string type from Java, or if it just reimplements those syntax extras in the standard library using implicits.


> I wonder whether Scala inherited the special handling for the (not special, but specially handled) string type

Scala uses Java's String exactly, but does define a few extra methods using its extension-method functionality (called an implicit class in Scala).

Templates like s"This is a string with a $variable" are part of the language, but `s` is a stdlib feature.

   xxx"First $a $b Second"
will be translated by the compiler to

   new StringContext(Array("First ", " ", " Second")).xxx(a, b)
The standard library defines `StringContext.s`, but there's no restriction. Several SQL libraries, for example, define `StringContext.sql` to let you safely embed type-checked SQL queries directly inside the code.


"X isn't supported natively by processors, so there's no real need to handle this as a primitive instead of letting people to use a library for it" is a general-purpose argument to cherry pick which primitives you support and which ones you don't. Surely this isn't your actual reason you choose to oppose Rationals / BigIntegers as primitives while accepting hashmaps and arrays?


Arrays, slices, hashtables, and channels are not supported natively by the processor, but Go still has them as primitives. Why not also add a number type which behaves in a sane, easy-to-understand, useful way?


Scheme has an explicitly minimalist philosophy, so much so that when R6RS was released with too many conveniences, it fractured the language and community. Now there are two specs, R7RS-small and R7RS-large, and they're far less popular than R5RS.

Rationals are also the default number type.

Lists are the base type used throughout Scheme, and can be used to implement arrays and hashmaps and whatnot, so Scheme doesn't really need them; you could just use a library.

... That didn't stop the committee from adding types to the spec, like records, promises, and other complex types.

What is useful and necessary in a language isn't defined by what the processor can natively do, or we'd only use registers, not arrays. And it isn't defined by an overbearing attitude towards one or more philosophies.


This is reminiscent of 'numeric tower' discussions from years ago.


Am I misunderstanding, or is Go really using an 'int' type that can be 32 or 64 bits depending on the system it runs on? If this is the case, I think that is crazy, and I can't think of any useful use case for it. If it's not the case, then please explain to me what that proposal is really about...


"int" in Go the native index type for arrays. I consider it an error to use it for anything else, and since I've adopted that policy, I've had no particular problem with ints.

Probably a good linter idea in there somewhere.


I'm curious what the motivation was to use int instead of uint considering indexes cannot be negative in Go.


Despite most of my professional programming being in Go nowadays, I am extremely sympathetic to the Haskell/FP way of thinking about things, and trying to make invalid state unrepresentable.

However, having made the mistake a few times now of trying to use "unsigned ints" for things that I want to assert are never negative, I've learned the hard way not to do that. The problem is, there's always some bug you have in the program that will drive the uint "negative", wrapping over to a huge number. Unfortunately, since you get no warning when that happens, it tends to take longer to show up, and without a clear cutoff to say "ah, this is clearly invalid" it can be hard to even program detection code, whereas checking for "less than 0" can be unambiguous.

I'd loooove it if I could easily, cheaply, and universally (i.e., not just in Go, but across all my programming languages) turn on a behavior that says "if this int or uint under or overflows, throw an exception instead of trying to do whatever stupid thing you're going to do". I get the sense this is heading down the road where in 20 or 30 years, this is just going to be common sense. However, we are very early on the road yet from what I can see. Whenever I bring it up, even today, even in this sort of context, it still is generally negatively received. (But not uniformly. A few other people agree. I think momentum is with me. I'm not sure my programming career will live long enough to see it, though.)
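
In the meantime the check can be done by hand. A sketch of what that looks like in Go (addChecked is a hypothetical helper, not part of the standard library):

    package main

    import (
        "fmt"
        "math"
    )

    // addChecked panics on overflow instead of silently wrapping --
    // the behavior the parent comment wants built into the language.
    func addChecked(a, b int64) int64 {
        if (b > 0 && a > math.MaxInt64-b) || (b < 0 && a < math.MinInt64-b) {
            panic(fmt.Sprintf("integer overflow: %d + %d", a, b))
        }
        return a + b
    }

    func main() {
        fmt.Println(addChecked(1, 2))             // 3
        fmt.Println(addChecked(math.MaxInt64, 1)) // panics
    }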


Rust does this in debug builds by default, and you can switch it on in production if you'd like.


Swift does this with the default operators. There are alternative operators if you want to get the more classical C-style treatment.


> if this int or uint under or overflows, throw an exception

Overflow flags are supported on some architectures, so it would really just be a matter of checking that flag.


Except in Go, integer types are defined with modular two's-complement arithmetic. Overflow is not considered a problem; it's a feature that code depends on.


The spec says:

> For signed integers, the operations +, -, *, /, and << may legally overflow and the resulting value exists and is deterministically defined by …

It does not go on to define the overflow values; that gets to be implementation defined.

This is in contrast to unsigned integers, which the spec does precisely define overflow for.


Using unsigned indexes would be fine if the program automatically throws an error on wrapping. Otherwise, it’s an accident waiting to happen, since you have to code all of your arithmetic keeping in mind the fact that the numbers are trying to blow up in your face.

If x is unsigned, then you can’t rearrange an inequality like x >= 4 into x - 4 >= 0, since the first inequality is only true sometimes, whereas the second is true always. This looks obvious, but bury it a little way into some nontrivial index arithmetic and boom.

Another more aesthetic reason is that indices can represent offsets into a structure relative to some other index, and in this case you really need negative numbers.
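
A tiny Go demonstration of the x >= 4 rearrangement trap described above:

    package main

    import "fmt"

    func main() {
        var x uint = 2

        fmt.Println(x >= 4)   // false, as expected
        fmt.Println(x-4 >= 0) // true! x-4 wraps to 18446744073709551614 on 64-bit
    }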


In general it is nicer to use signed numbers for indices. The actual index can't be negative, true, but differences between them can, and it means you don't have so many dubious sign conversions.


Because you want to represent errors. For example, a find() function that returns the index of some element: you want to return -1 when the element doesn't exist.


Shouldn't idiomatic go return an error instead? Magic number ranges for errors always seemed a bit old-fashioned.


It's just an example... Another example would be if you want to iterate your array in reverse. The usual `for (uint i = size-1; i >= 0; i--)` won't work.


That example doesn't work precisely because you're conflating the concept of the loop counter (whose type needs to represent an integer that can go below 0) with the array index accessor. There's no reason for these two things to be the same type; it's just a pattern that accidentally works because we allow arrays to be accessed using a type that represents negative numbers.

What would be a more compelling argument is to use int in the same way that python does - a negative accessor means offset from the end of the array, not the start. I'm not sure golang lets you do this though...?


Returning -1 instead of an error is abusing the type system. In C, there is no error type, so it might be excusable there, but it is inexcusable in Go, IMO.


> I'm curious what the motivation was to use int instead of uint considering indexes cannot be negative in Go.

I want to emphasize that I do not know the true answer to this question.

I think that indexes are int to play nicely with a decrementing loop counter. Doing:

    for i := len(n) - 1; i >= 0; i-- { fmt.Println(n[i]) }
requires i to be signed due to the last iteration.


I'm not sure about go, but in C this could be:

  for(size_t i = n_len; i-- > 0;) printf("%s\n", n[i]);  /* n_len: the length of n */
(This is a large part of why unsigned over/underflow is well-defined behavior in C.)


In Go, i++/i-- is a statement, not an expression, so that doesn't work.

More generally, assignment is always a statement, and never an expression (e.g. you can't say a = b = c)
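
You can still express the same idea in Go by keeping the decrement in the post statement and testing before the subtraction, so an unsigned index never underflows. A sketch:

    package main

    import "fmt"

    func main() {
        n := []string{"a", "b", "c"}

        // Count down from len(n) to 1 and index with i-1, so i never
        // needs to go below zero; safe even with an unsigned i.
        for i := uint(len(n)); i > 0; i-- {
            fmt.Println(n[i-1])
        }
    }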


Crazy indeed. This was like lesson learned #2 from C, which later introduced the stdint types int32_t et al.

A modern language with this type of confusing and platform-dependent behavior is inexcusable. By all means, include a native data type as big as the processor can handle (size_t), but don't call it int, which is the default data type most people will use out of pure laziness.


Which would you choose instead? To default to 32 bits (even on 64-bit systems), or to default to 64 bits (even on 32-bit systems)?

Defaulting to 64 bit math on 32 bit systems will have a huge performance penalty on generally the slowest/oldest systems where this penalty is least desirable. It's not clear to me what the benefit to this would be for most programs.

Running 32 bit ints on 64 bit systems will cause problems handling large arrays on systems with a lot of memory. I think it also complicates generating efficient loop code, although C++ gets around this with signed integer overflow being "undefined behavior".

Of course Go has int64 and int32 types if you want to use them (and all conversions require explicit casting), but the default type for array lengths etc is the platform native type. But what is the better alternative?


I would use int and long like in C# for example, with no magic int that can be 32 or 64 bit depending on the platform to avoid any confusion.


And as a result of this, you can't have a 3GB byte array in C#. I think there needs to be a big benefit to make hard-coding a restriction like that into the language worth it.

Unfortunately I've read every comment in this HN thread (so far) and I haven't found any specific one mentioned, just people calling it "crazy" and "confusing" over and over.

Go is a language that has explicit pointers! And of course those can be 32 or 64 bits and change struct sizes, field offsets etc. There are certainly things about Go that are confusing but I've never thought this was one of them.

Just FYI Go has int8, int16, int32, int64, and int types. Only the 'int' type is the machine-native one. It's always been obvious to me that 'int' is the one without a fixed size, but I think this would be less obvious if the naming convention was different using short, int, long etc.


The subset of Go users who deploy on 32-bit hardware in 2018 and care about performance at that level is likely to be vanishingly small.


32 bit ARM is still very common.


What about Go on 32-bit ARM?


Even if it's OK to degrade the performance of a small minority of users, is there a big benefit to emulating 64 bit ints everywhere on 32 bit hardware? (I'm not saying there isn't a benefit, just that it's not clear to me what it is).

I don't know how many users are still on 32bit. Maybe low power network devices? Older mobile phones (Go isn't usually run on mobile, but it's possible...).

If at some point in the future 32 bit systems become totally irrelevant for Go users, maybe a release of Go could drop support for it altogether and make int an alias for int64, and save people some type casts. 32 bit systems probably aren't at that point yet though.


>I don't know how many users are still on 32bit. Maybe low power network devices? Older mobile phones (Go isn't usually run on mobile, but it's possible...).

Well, 16 bit would also be possible if someone took the effort. Perhaps it just shouldn't be encouraged.


By 'possible' I meant that it's currently supported by the Go toolchain.


It's not crazy; it's defined by the system ABI. C and C++ have this too, in that int can be whatever size the ABI says; hell, char can be 32 bits.


It may be crazy but it's not exactly without precedent. Neither C nor C++ fix the sizes of the fundamental integer types, although for backward-compatibility reasons popular 64-bit platforms still have 32-bit `int` and even `long`. But yeah, there's a reason eg. Rust has no `int` and friends but `i32` etc. instead.


Rust has a `usize` which is 32 or 64 bits depending on the platform and also serves as the native index type for arrays and such.


Yeah, that makes sense, because you need some type that spans the address space, and that depends on the size of the address space. That is what int is used for in Go, though why they chose signed rather than unsigned is confusing. With unsigned you would never have to do negative bounds checks, and you can prove a lot of other useful things if you know your index is always positive.


I don't know how Go does it, but in some languages negative indices count backwards from the rear end, e.g. myarr[-1] == myarr[myarr.length-1].


Right, 64-bit Windows has 32-bit int and long and 64-bit long long (LLP64) while 64-bit unixes generally have 32-bit int and 64-bit long/long long (LP64).


C only defines char, short, and int as having minimum sizes of 8, 16, and 32 bits.


Actually C defines the minimum size of "int" as 16 bits, because it defines the minimum range it should support as [-32768 .. 32767]

See https://en.wikipedia.org/wiki/C_data_types


Nit: [-32767 .. 32767]. C permits ones' complement and sign-and-magnitude as well as two's complement.


no, int is 16 bits minimum


You are not crazy, and it's what makes this proposal even more appropriate than other languages. "int" is already a magic type.


Yeah, sounds crazy. It only worked for C for 40+ years...


Ignoring the basic int type and using a zoo of preprocessor defines instead is what has worked for C all that time.

But I still think that native types have their place. It can often be quite reasonable to accept serious limitations on narrow machines where the performance gains won by the trade-off are desperately needed, lift those limitations on wider machines and grant an enormous safety margin on the widest architectures. When you dive deeper, the values where this makes sense still tend to be "index-like", as GP stated, but they don't have to strictly be indexes (e.g. IDs, a wider machine will be capable of working on bigger datasets and therefore be more likely to exhaust a given width of IDs, all indexes are identifiers but not all identifiers are indexes)


> Ignoring the basic int type and using a zoo of preprocessor defines

My C is a bit rusty, but why should #defines be used in this scenario? A typedef set in a config.h.in is more than enough to meet the requirements of this use case.


Why do you need your own typedef? stdint.h is available, if you're using a compiler from the past couple of decades at least.


And what percentage of programs written in those 40 years are actually able to run on systems with more than one or two different memory layouts?


Why is that a problem? There's int8/16/32/64 for when you need to be specific.


I don't know how many times I've been writing a program using int, then suddenly I have to do some math on the index of a range (for example), and the math.* functions require int64. I start casting to int64, but things get infected and it spreads. Eventually I refactor everything to use int64 and wonder why int can't just be an alias for one of the others. Personally I would make it int64, since that is what the standard library thinks math should be done with.


Odd, I've rarely had this issue, and almost never need to do complicated math on the index of a range (maybe "multiply by two and add one" kind of stuff, but nothing from the "math" pkg). Also, note that the math.* functions all use float64, not int64.


Yes, you're right float64 in the math package. So I've definitely run into this with float vs float64.

As for int vs int64, I definitely run into what I describe on Project Euler. Probably using a self-referential map[int]int when I suddenly need to pull an int64, and everything gets infected.

I realize it's just a matter of convenience for me, but I personally have no downside to int == int64.


Meh I use C# and I rarely need an int bigger than 2147483647, and if I do I use a long which gets me up to 9223372036854775807, and if I need more than THAT then I'll use a math library of some sort.


I think the question though is that if "int" were an actual integer, would you still default to using int32/int64, and for what reason? There are valid uses for fixed-width types, but beyond domain-specific number crunching (crypto, image processing, etc.) I would argue that the use of these types is technically incorrect, and not what people expect. Even experienced programmers will see "x < x + 1" and mentally replace that with "true", even though if x is fixed-width, the value of that expression actually depends on x.


The problem with overflow isn't just needing to store numbers larger than 2 billion. Sometimes, intermediate values are larger than that even if the final result isn't.

Take averaging as a very simple example. Doing (a + b) / 2 will overflow if a and b are sufficiently large, even if the average will always fit in 32 bits. Things like this go unseen for years.
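
A quick Go demonstration of the trap, plus the classic workaround that keeps every intermediate value in range:

    package main

    import "fmt"

    func main() {
        a, b := int32(2000000000), int32(2100000000)

        // The intermediate a+b overflows int32 even though the true
        // average (2050000000) fits comfortably.
        fmt.Println((a + b) / 2) // -97483648

        // Rewriting as a + (b-a)/2 never leaves the int32 range.
        fmt.Println(a + (b-a)/2) // 2050000000
    }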


Don't most people compute the mean iteratively when they need to do this? Sure, it goes unseen, but programmers on systems like this are expected to understand the math model and write tests.


Systems like what? My expectation is no, most people make mistakes.


I mean, in C and other low-level languages unlike python, it's generally assumed you have a basic understanding of the machine model and the consequences of arithmetic in limited types.

If most people make mistakes, they should self-select to languages which have properties that protect them from their ignorance. for example, I mostly program in Python using longs so I don't have to worry about overflow.


What kind of "long" are you referring to? Most programmers would think of a C long, int64, not a Python long, bigint.

Maybe people ought to self-select, but that would mean they'd need the training or experience to recognize what they don't know. It's often the most ignorant people that believe they have the most expertise.


I get your point. I'd probably go with float/double precision for that task.


Then you can lose precision that you might actually need - a bug that's even more annoying to track down than straight overflow.


Yes indeed! But what is the alternative, especially if you are using approximations to transcendental constants?


There have been 1,543,530,695,158 milliseconds since epoch.


That would fit into a spacious Int64.

I typically wouldn't be passing around milliseconds since epoch as a raw number, but then that's C#; I guess you might do that in other languages for performance or out of necessity.


Why oh why do people want a value to have different bounds depending on the system it is used on? This is a source of huge confusion and why people stick to uint8 and other precise types.


It's a perfectly reasonable niche use case for achieving maximum performance.

My main issues with it are:

1. Like many other performance optimizations, it is a trade-off. You may be sacrificing easy maintenance or even introducing breakage (if the person using it doesn't understand the limitations and implications). So IMHO you should only do it if you can show through profiling, etc. that the gain is real and worth it.

2. As a matter of language ergonomics, it should never be something that anyone does unknowingly. You should have to explicitly ask for it. The type name should be something blindingly obvious like "native_int" or "word_int".


Even Swift, whose mantra is to do nothing unexpected or implicit, has a machine-dependent (word-sized) Int type.


You're trading off security here. Users are going to shoot themselves in the foot by assuming that int is one size or the other.


People have traditionally wanted to use the maximum sized type that can fit in a register.


More precisely: they wanted the integer type that was large enough and ran their code fastest.

On CPUs with 64-bit and 32-bit registers, that typically is the register-sized integer type. Machines with 16-bit registers are a bit of an edge case, as 16-bit integers may both be a lot faster on them than 32-bit integers and, in many cases, too small.

If Go chose to use 32-bit integers as the default, some users on 64-bit systems would complain that they couldn't create, say, an array of 6 billion ints. If it chose 64-bit, some users on 32-bit systems would complain their loops were needlessly slow and their code bloated (why waste 4 bytes on almost every integer in a program?).

Not having generics makes this more of a problem. If Go had generics, switching between integer sizes could be a matter of recompilation using a different compiler flag.

A sufficiently advanced compiler would help here, but Go doesn't have one (which is not exceptional), and doesn't even aim to have one (compilation speed seems to be more of a goal than run-time speed).


> Not having generics makes this more of a problem. If Go had generics, switching between integer sizes could be a matter of recompilation using a different compiler flag.

Isn't it already the case that you can switch between int sizes with compiler flags?

    var i int
That will be 32 bits if I compile it with GOOS=linux GOARCH=386 and 64 bits if I swap in amd64 instead of 386 there.
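
For what it's worth, a program can report this about itself, so it's easy to verify when cross-compiling:

    package main

    import (
        "fmt"
        "math/bits"
        "unsafe"
    )

    func main() {
        fmt.Println(bits.UintSize)         // 32 with GOARCH=386, 64 with amd64
        fmt.Println(unsafe.Sizeof(int(0))) // 4 or 8 bytes, correspondingly
    }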

Is there any language where generics are useful for switching between int sizes? None come to mind offhand, so I don't quite understand what you mean.


Traditionally the register size was too small: it's very common to have numbers above 256 or 65536. That's no longer true with 32-bit numbers and a limit of 2 billion.


On 64 bit systems you can have large arrays that are bigger than a 32 bit int. For example, it's nice to be able to load a >2GB file into a byte array.

For comparison, Java's int type is 32 bits on 64 bit systems, but then this limits the array size. It seems like a lesser evil to have a platform-native int type than to limit the size of arrays (especially since servers can be expected to increase their max memory going forward).

Of course, standardizing on 64 bit ints instead would require emulating them on 32 bit systems, when they can't support that much memory anyway (you could argue that 32 bit systems don't matter much anymore, but if you really don't care about ever running on them, then you can just treat 'int' as 64 bits and not variable size).


C# is the same: int is merely an alias for Int32, and the default Array type is limited to 32-bit lengths.

If you really need to, it's easy to wrap this in your own BigArray class holding an array of 32-bit arrays and overloading the [] operator to take an Int64 argument.


It's a rational practice when your language allows casting pointers to integers. Since the size of the pointer itself will vary based on the underlying architecture, you need an integer type which will also vary based on the underlying architecture.


I feel you, but that's uintptr. int is more useful as the "native type" for array/slice length/cap/indexing.


Except length/cap/index are always positive, so I think int was a mistake here.


I think that the rationale for using int for array indices and lengths and so on is to avoid shooting yourself in the foot with wrap-around arithmetic (again, my thesis being that when 99% of people use integers, they expect them to behave like integers and not wrap)

Take for example code which finds consecutive differences in a list, something like

  for i := 0; i < len(list) - 1; i++ {
    print(list[i+1] - list[i])
  }
This code looks fine and reasonable, and we even avoided the off-by-one error at the end. However, if len(list) is a uint and we run this on an empty list, len(list) - 1 wraps around to the maximum uint value, and the program explodes.

So this is the rationale for using signed integers rather than unsigned integers. I think the language semantics should go further, and use true integers pretty much everywhere. Note that the compiler could still easily see that the index variable in the loop above is bounded, and use a machine word under the hood. But the programmer only ever has to think about well-behaved integers.


I actually agree. I was responding to someone who dismissed the entire concept.


no, I get that, but int is not for pointer<->integer conversions. That is very specifically a separate type:

    uintptr  an unsigned integer large enough to store the uninterpreted bits of a pointer value
https://golang.org/ref/spec#Numeric_types


Sure, but did I imply it was? I’m genuinely confused.


> language allows casting pointers to integers

The original thread was about converting int to be an arbitrary precision type: "so that 'int' can become a true integer". One would imagine that, to preserve the ability to do pointer <-> non-pointer number, they would just leave uintptr alone, and so basically ignore your concerns, whereas int as the "native type" for slice indexes and the like is a more likely reason to prevent messing with "int" itself


Okay, that's the source of the confusion. I was not replying to that sentiment, which is a nuanced discussion on what an int should be. I was replying to the_clarence's categorical dismissal of variable width integer types.


This was one thing COBOL got right: built in support for a DECIMAL data type.


Fixed – Fixed-place decimal math library for Go (github.com) https://news.ycombinator.com/item?id=18559625


It’s been a while since I COBOLed, but if I recall, the COBOL decimal is similar to Currency types in languages like C#. (Maybe I’m misremembering.) If I’m right, though, it’s not a floating point number. It’s currency properly handled via integer math under the hood.


In C#, there's a "decimal" type, which is exactly that: decimal floating point. However, one catch is that it doesn't allow an exponent that would place the decimal point outside the representable digits of the number (i.e., unlike floats, the difference between any two adjacent decimals is never more than 1).

But it's still plenty useful, because it allows you to accurately represent decimal fractional numbers, which is exactly what's needed in many domains, since humans work with decimal fractional numbers. Representing money would be one particular example.


Decimal floating point is actually an established thing and part of the current IEEE 754 spec. I'm not too familiar with COBOL but pretty much any modern language will offer it through a library or compiler extension. Recent IBM Power processors even have hardware support.

https://en.wikipedia.org/wiki/Decimal_floating_point


Both you and the parent are correct. Decimal floating point is a long-established thing, but the vast majority of "Currency" or "Decimal" data types in modern languages are arbitrary or fixed precision (to some configurable precision) and use integer math under the hood.


Given the age of COBOL and the age of IEEE floating points, it would be very unlikely if it used a floating point number.

(And yes, I know that Konrad Zuse invented floating point back in the 1930s ;))


Probably implementation dependent, but the COBOL I used that put me through college used some variation of BCD (binary coded decimal).


Cobol uses fixed point decimal


That would be quite a fundamental change. At the moment _int_ and _uint_ have certain semantics that, if changed, would surely break many applications and libraries that rely on the current semantics. I am finding it hard to think of a more sweeping and drastic change to the core of a language.

Having said that, I'm by no means a Golang expert – this is the comment of an outsider looking in. I get that the proposer is Rob Pike, and obviously Mr. Pike is god-like, so what is it I'm missing? Is Go 2 meant to break everything? That's like, wow.

Any low-level hardware bit-banging code is going to have to be audited or break in subtle ways, will it not? Is Golang different from C and C++ and languages of that ilk that this sort of change is not that much of a problem? Help me out here folks, I'm genuinely perplexed.


If the change as originally proposed went through, it probably wouldn't affect most uses of bit-banging low-level code, because such code already uses int32/int64 rather than int (which has different sizes on different platforms). You are correct that it could definitely break some programs though, and so with recent sensibilities veering away from making any non-backwards-compatible changes, an amended proposal would probably just introduce a new "vint" type (maybe removing the old int type at the same time if it is really ambitious).


If these are new datatypes then they would break nothing (ignoring the "int" default part).

> so what is it I am missing?

The Go team has been adamant about avoiding any changes that break existing code. The whole point of Go 2 is that at some point that might have to happen and they're trying to minimize that pain.


I've been thinking about this, and even though it's indeed a breaking change, it's one of those changes that can be easily toggled through a compiler flag. I cannot picture many libraries relying on the edge cases that Pike enumerated.


My (admittedly unsophisticated) mental model is that int32 and the like are simple primitives that can be stored in registers and use the processor's native ADD/MUL codes, while BigInt is going to be some boxed structure with its own implementation of arithmetic operations. For as common as number munging is (especially in somewhat lower level code that Go shoots for), it would carry a huge performance penalty, wouldn't it? Or are there more sophisticated ways of offering a "true integer" type that I'm not aware of?
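
Not the Go team's plan as far as I know, but the trick used by CPython, many Lisps, and Smalltalk is a hybrid: keep the value in a machine word until an operation overflows, and only then spill to a heap-allocated bignum, so the common case stays register-fast. A hand-rolled sketch of the idea (hypothetical type, not a real proposal):

    package main

    import (
        "fmt"
        "math"
        "math/big"
    )

    // Int is a hypothetical hybrid integer: small values live in a
    // machine word, overflowing ones spill to a *big.Int. A real
    // implementation would pack a tag bit into a single word instead
    // of carrying a two-field struct.
    type Int struct {
        small int64
        big   *big.Int // nil while the value fits in small
    }

    func (x Int) Add(y Int) Int {
        if x.big == nil && y.big == nil {
            s := x.small + y.small
            // The fast path is one add plus this overflow check.
            if (y.small > 0 && s < x.small) || (y.small < 0 && s > x.small) {
                return Int{big: new(big.Int).Add(big.NewInt(x.small), big.NewInt(y.small))}
            }
            return Int{small: s}
        }
        // Slow path: promote both operands to big.Int.
        xb, yb := x.big, y.big
        if xb == nil {
            xb = big.NewInt(x.small)
        }
        if yb == nil {
            yb = big.NewInt(y.small)
        }
        return Int{big: new(big.Int).Add(xb, yb)}
    }

    func (x Int) String() string {
        if x.big == nil {
            return fmt.Sprint(x.small)
        }
        return x.big.String()
    }

    func main() {
        a := Int{small: math.MaxInt64}
        fmt.Println(a.Add(Int{small: 1})) // 9223372036854775808, no wraparound
    }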


I'd like this because it'd make certain classes of programming problems simpler.

Admittedly, not for things I build in my day-job but mostly for the kind of things you'd get when doing 'project euler' or 'leetcode'. Or more seasonal, the coming "Advent of Code".


I'm pretty excited by the idea of Go getting generics. This has always been my deal-breaker issue with Go and I'm glad that what appeared to be a disingenuous "let's pretend to be hunting for the truth until people go away" stance was actually really a hunt for the truth! Goes to show that you shouldn't make snarky snap judgments.

As for all the folks claiming they'll leave Go if it gets generics, it's faintly reminiscent of Mac fanboys claiming PowerPC chips were the Very Best right up until this was obviously not true. C++ generics are a PITA in many ways, but you can be insanely productive in the STL without having to hand-roll everything and with good type safety.

Despite the pain, I've been amazed at how easily you can build up some really complex data structures as pretty much one-liners ("Oh, I need a vector of maps from a pair of Foo to a set of Bar") that would either take a preposterous amount of code (or be a type-unsafe disaster waiting to happen) without generics.

Hopefully the final Go 2 generics proposal will capture some of this goodness without some of the horrifying C++ issues (error messages, bloat, sheer brain-numbing complexity).


"As for all the folks claiming they'll leave Go if it gets generics"

Baffled at this claim, I searched and found one person who really seemed to be saying "if Go changes dramatically...".

This recurring notion that Go fans are anti-generic is not rooted in reality. Instead they simply didn't buy the "either it's there or the language is useless -- generics or bust!" argument that pops up in every Go discussion. It's a fine, if imperfect, language without generics. It's a better language with them.


I don’t know about threatening to leave Go, but I’ve definitely seen a lot of posts claiming Go is better without generics.


The more nuanced and better side of that is that any language is better without badly designed and badly implemented generics.


Maybe the even more nuanced side is that a language that can't conjure a safe and well designed generic has a badly designed type system in the first place


That sounds axiomatic.

How would you define "badly designed?" I can imagine a context where the "well designed" type system is very plain.


I really don't get the people opposed to generics. Is there actually a cross-section of experienced developers who have come from languages that have generics and understand them, yet don't want them in Go? If so, why, and what do they use instead? Because Go has no compromise-free answer for generics: you either lose type safety, maintainability, or performance.

I suspect a lot of the generics hate is due to a large chunk of the community coming from dynamically typed languages, in which case they're just having a negative reaction to unfamiliarity.


Increasing language surface area will generally increase the complexity of all API surfaces written in the language. This has obvious costs.

It's true generics will shrink some specific APIs, where they are a good fit.

But they will also be used opportunistically by developers excited to push their boundaries.

Of course you can say “just don’t do that” which works if you have a tightly controlled codebase. But most code is not that, and will be handed off to novices over and over for fixes.

So, it’s a question of whether you cater to the advanced developer who can capably handle a vast toolset, or do you commit as a community to more rudimentary tools, in order to reap the rewards of systemic simplicity.

There is no right answer.

I believe in the future all languages will fork into a simpler novice subset for general use and an expansive language for infrastructure. These will both be valid in the same parser, but the subset will be quarantined at the package management level.

Go, EZ-Go, C, EZ-C, etc

I am writing all of my code in EZ-JS


It’s C’s lack of complexity which makes it necessary to do dangerous things for everyday purposes. I’d much rather a novice interact with generics, where the compiler is a safety net, than with void * and interface{}.


void * and interface{} are not going away when generics come. Try searching for "Object" or "<?>" in some Java code...


Which is why Go 1's type system was a bad decision. interface{} should have never existed in the first place.

At least it'll definitely get rarer.


>> It’s C’s lack of complexity which makes it necessary to do dangerous things for everyday purposes.

That's not the case anymore. We have C++, which has generics and "fixed" C's lack of complexity for sure... Now, for some 'weird' reason, some people still use C. Wonder why?


Compare C++ to Standard ML. Simplicity is compatible with generics.


It hasn't been the case since 1993, when Cfront was dropped.

In certain domains like UNIX like OSes, I don't see C ever going away, due to the infrastructure, symbiotic relation with the OS that gave it birth, and the culture.


> Of course you can say “just don’t do that”

This exactly. The beauty of Go is that currently you do not need a book called "Go, The Good Parts".

Yes, generics would make a lot of things easier, but almost everything more complex and footgunnish.


>I believe in the future all languages will fork into a simpler novice subset for general use and an expansive language for infrastructure. These will both be valid in the same parser, but the subset will be quarantined at the package management level.

I don't think we'll see this happen much for existing languages, but it could be a very interesting angle for a newly-designed language (or rather pair of languages).


To some extent this is already happening in languages popular for machine learning. Libraries are written in C/C++ and the users just glue things together with very accessible API's.


It’s already happening. For example many people code in a subset of C++.


Although ultimately I do want generics in Go I am afraid they will make the language more difficult to use and understand. Generics in c++, c#, scala, java, etc all tend toward being very complex and change the way programs are written.

The focus moves toward a taxonomy of types, and developers (myself included) sometimes get stuck on difficult type problems. There's something about trying to preserve type safety which sets the bar extremely high for bypassing the type system when there's not an easy solution, and before you know it you've wasted 2 or 3 days writing code which doesn't actually do anything but placate the compiler.

And often that type-safe concoction you create is almost indecipherable when you come back to it later.

For example here's a project I worked on recently which cached a concrete version of a generated method using generics:

https://github.com/DataDog/dd-trace-dotnet/blob/develop/src/...

Used like this:

    var originalMethod = DynamicMethodBuilder<Func<object, byte[][], object, object, bool, T>>
               .GetOrCreateMethodCallDelegate(
                    redisNativeClient.GetType(),
                    "SendReceive",
                    methodGenericArguments: new[] { typeof(T) });

At least for me that was really hard to figure out how to do, and I still have to squint to see what the heck it's doing. The non-generic version wasn't type-safe and it wasn't as fast, but it sure was a lot easier to read and understand.

And to give some sense of the complexity involved, Rob Pike mentioned in his talk that the proposal spec for adding generics to Go is longer than the spec for the entire language.

I think the complexity is worth it, but I just hope we can be cautious about how and where generics get used in real-world code, otherwise we'll end up with gobbledy-gook that only experts can decipher... and that would be really sad, because the promise of Go was a language normal engineers could be productive with.


>Generics in c++, c#, scala, java

They're not all the same though are they.

C++/CLI had both compile-time generics (templates) from C++, and run-time generics from the CLR. And they could be complementary at times.

For compile-time generics, Dlang's are a lot more sane than C++'s, having had the benefit of coming later and dumping C compatibility.

Similarly, CLR (C#, ...) generics had the benefit of being designed having seen Java first, so IIRC they're baked into the CLR. CLR generics were derived from work done at MS Research Cambridge, and I seem to remember Don Syme (F# creator) being rather proud of them. Disclosure: I was contracting at MSR Cambridge back in 2007.

Anyway, the golang designers will be aware of these implementations, so hopefully they'll come up with a nice design.


I'll answer from my perspective.

I don't hate generics. I see the value of generics, especially after having worked with Go for so long; I've had to do some mental gymnastics to get around not having proper generic collections.

That being said, I'm not excited about seeing generics in other people's code. The added complexity doesn't really solve problems I have anymore.

That being said, I'm actually way less excited about overloading.

I think generics and function overloading is going to make me think "where the hell is this coming from?" a lot more often and then I'm going to need to load it up in my IDE, or vim with way too many plugins, and start following definitions.


I think the main fear people have isn't the concept of generics, but the implementation of generics. Go's primary goal is simplicity, and implementing generics isn't simple.


But like I said, Go doesn't have an answer for generics. So the complexity just lives in userland code instead of the language itself. The problem and complexity don't go away.


If the choice is between badly implemented generics and having the problem manifest as userland code, I'll take the latter. I've actually had to debug C++ production code produced by a confluence of templates that had no source code of its own. At least with boilerplate, you can simply see what's going on right there.


If language features were free we'd likely have had generics for a long time now. Unfortunately generics is a trade-off: you get development speed/ease and pay for it in compile time, binary size and/or execution speed.

This seems to be slowly changing, but Go was designed to be a solution to Google problems - python being slow but some c++ applications taking literal hours to compile. Keeping that perspective in mind makes it easier to understand why Go maintainers have not accepted an implementation of generics into the language yet.


If you maintain type safety, you'll pay in increased compile times with or without generics. You'll either hand-roll an implementation for each type (https://golang.org/pkg/sort/) or you'll generate code before actually compiling. The problem doesn't go away.
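
The sort package is the canonical example of the hand-rolled flavor: one small boilerplate adapter per element type and ordering, roughly like this:

    package main

    import (
        "fmt"
        "sort"
    )

    // The pre-generics pattern: write Len/Swap/Less boilerplate for
    // every slice type you want to sort.
    type byLen []string

    func (s byLen) Len() int           { return len(s) }
    func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
    func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }

    func main() {
        words := []string{"ccc", "a", "bb"}
        sort.Sort(byLen(words))
        fmt.Println(words) // [a bb ccc]
    }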


I'm of 2 minds. I come from a Java background so I've personally wanted generics. ORM's are 1 use case that comes to mind.

But. There are many modern languages that already have generics. Why can't Go be that one that doesn't cave in and remains powerful in its niche and perhaps may never be the preferred tool in other areas? Why must it be useful for web development, microservices, DAL, etc.?

Any implementation of generics in Go will come with complexity tradeoffs.

I feel like the community learns more about languages and software engineering when we maintain language diversity and see the pros and cons of each approach in practice.

I want generics for selfish reasons, but would also like to see how a modern strongly-typed language solves problems without it.


As someone who quite likes generics, I'd love to see how a modern strongly-typed language solves problems without them. And I think that's what the Go community and developers have been trying to do up until now. It looks like they're giving up. I'm not sure whether we'll ever find another way to tackle composition/scalability as effectively as generics do, but such a technique would be fascinating to see.


It's people that have seen the abuses of C++ templates. They're very powerful and therefore people tend to want to use them for really complicated things.

Look up things like SFINAE and compile time metaprogramming. Templates were not intended for those things. When they work they're ok, but if they go wrong good luck following the 10 line error message.

Here's an example from Rust:

https://www.reddit.com/r/rust/comments/5nifpm/they_said_rust...

That's what people don't want.


That's not compile-time metaprogramming at all; it's just a series of wrapped closures. It's basically the sequence of function call names preceding that call backwards, with different capitalization.


The big difference between C++ and Rust is that, when you scroll down in this Reddit thread, it shows that the Rust guys have a clear path for fixing this issue, whereas there is no fix for this in C++ (that I'm aware of).


Ha ha, "10 line error". [Puts on Monty Python voice] "What I wouldn't give for a 10 line error..."

Incautious use of the template mechanism - or just a minor typo - has given me a couple screenfuls in the past.


Any sufficiently advanced type system can be used for compile-time metaprogramming - that's just an inevitable side effect of a type system expressive enough to capture all the more convoluted (but still plenty common) cases without hacks like interface{}.


For me at least, the reluctance comes from the new proposal process not having yet proved itself for large backwards-incompatible features. I want to be assured that a Go with generics is a Go with a really well-integrated feature, and not some grafted mutant appendage whose only real purpose is to appease the greater community.

It's great that they're trying out the process on smaller, more simple proposals. My hope is that this system will either produce really good features, or reveal that there are simply no satisfying solutions.


Coming up with a complex solution is much easier than finding a simple one. Go forces you to find those simple solutions. There are places where generics are the only solution, but they also enable lazy design.


I'm hoping things like generics can just be accepted to be a good idea moving forward and that we can all agree that languages without them are handicapped.

I still feel things are up in the air about exceptions, but maybe we can at least agree on generics, which would make me feel better.

Using go without generics just felt insane to me...


Yeah, Go has some really fantastic aspects around tooling and a good concurrency story, but it's otherwise such a huge step backwards. I can't fathom the reason for not having generics. It's such a simple, completely common-sense abstraction.

Things like typeclasses or multimethods offer vastly more abstraction power. These are (a bit) more difficult to understand, but you certainly don't have to be a genius (take it from me). I can kind of get why a language targeted towards "average" programmers might want to omit these.

But here's the thing: The less abstraction power a language has, the more complexity must be handled by the developer. This leads to things like Java Spring, which you do have to be a genius to understand.


s/genius/masochist/g


>"Oh, I need a vector of maps from a pair of Foo to a set of Bar"

This is a fun example since you can do that in Go now, as it comes with a generic vector (essentially) and map :D

I think that was the design decision: 95% of generics use cases are covered by having growable arrays and maps, so why clutter the language with generics?

I like generics. I think most people who dislike them are coming from C++ templates, which have downsides that don't exist in newer generics systems (such as C#'s or F#'s).


What are generics?


Rather than writing multiple functions that take different types as args and return different types (say int8, int16, int32, etc.) but do the exact same thing, with generics you can instead write a single function that takes a number (which could be any int type) and returns a number. That is generics in a nutshell. The function is generic, not specific to one type.

Generics allow for less code, thus they are easier to debug, test and reason about. I've used them a lot in C++ and I do miss them in Go. It's not a deal breaker for me either way, but for people who write larger, more complex code, not having them makes it more difficult (more code to write, test and maintain).

C does not have generics either. Go is a lot like C in this regard.


C has minimal generics support since C11.


Wouldn’t this make golang slow though? Isn’t that what people claim makes python slow?


In an ahead-of-time compiled language, probably not. The generic function is effectively a code generator: the compiler generates a concrete function for each type it's used with, puts the calls in the right places, and then optimises them as per normal.

And no, being a dynamic language is not what makes Python slow. Lua, Nim and Scheme are all examples of dynamic languages with fast implementations.


- Generics do not make runtime performance slow, but they sure do make compile times slower (although it really depends on the type system and its implementation). For example, C++ templates, although very powerful, make build times orders of magnitude slower when used poorly. One of the most important features of Go is its fast compile times, but generics/contracts could potentially slow them down a lot.

- Yes, being a dynamic language slows it down a lot. For example, when evaluating a+b, the interpreter has to check the types of the variables a and b at runtime before performing the right form of addition (it might be an int, or a string, you don't know). Even with JIT (just-in-time compilation) the compiler has to initially “guess” that the variables are numbers, and fall back if that is not the case. Statically typed languages do not have this problem, because you know the types of variables beforehand at compile time.

By the way, Nim is not a dynamic language. And LuaJIT is one of the fastest dynamic language implementations because the language is very simplistic (compared to Javascript/Python/Ruby) and Mike Pall is a robot from the future...


> the interpreter has to check the types of the variables a and b at runtime before performing the right form of addition (it might be an int, or a string, you don't know)

That's not necessarily true. In Scheme, a very dynamic language, the compiler will generally make choices about the memory layout of the various values before launching into the evaluation phase. Where safe or possible to do so, it will reduce, at compile time, the number of choices that have to be made at runtime. The interpreter may not have to look up what the data type is, because it may just have two bits of memory and an instruction to call. You can know what the data will be, and optimise for it. [0]

There's no reason that: b = 1; c = 2; a = b + c needs to be slower than a = 1 + 2 if b and c are never referenced before or after. But in Python, it is, because the design makes it harder to know whether or not an object should get optimised away.

> And LuaJIT is one of the fastest dynamic language implementations because the language is very simplistic

Right, Lua is dynamic, and one of the design choices was making it simple, and so it's easier to make it faster.

[0] http://home.pipeline.com/~hbaker1/CheneyMTA.html


Didn't know about Scheme. Thanks for the link!


If you've never met Scheme, then SICP [0, 1, 2] may be something that can change the way you program. It certainly made me better, or at least gave me a deeper understanding.

[0] https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PLE18841CAB...

[1] https://www.amazon.com/Structure-Interpretation-Computer-Pro...

[2] https://web.mit.edu/alexmv/6.037/sicp.pdf


Now I have to ask, what would you say does make python slow?


Design choices, to be blunt.

Some of it is simple stuff, like CPython being interpreted, so PyPy gets a huge boost by JITting.

Some of it is much harder stuff, like the way Python is designed to store objects in memory (a list is a pointer to a contiguous space of pointers to things that might be pointers...), and everything that hangs off each object (everything has a dict), and the awful GIL ([0]). Awful for performance, great for thread-safety.

Every single part of a language design has trade-offs. It depends on what you're trying to do whether or not they help you, or hinder you.

Sometimes you change how you're doing things because priorities change, and get a radical improvement, like Python 3.6's dict. Most of the time, you don't. One step at a time. Not being able to break backwards compatibility hinders the designer if they realise a trade-off they've made was a mistake. So you get stuck with some features you'd rather not have.

[0] https://wiki.python.org/moin/GlobalInterpreterLock


To be fair, all of the aspects of the language itself you mentioned apply to JavaScript (save perhaps the difference in how literally Python takes `int` etc. being objects, vs. JavaScript's unboxed `number`). I suspect most of the difference is that one has had three of the largest corporations in the country competing to have the fastest implementation, in some cases for decades, and the other hasn't.


I think a bigger reason is that Python prioritizes simplicity of internal implementation.

To be sure, the internal implementation of CPython (the only one I'm familiar with) is complicated as hell. But it's a whole lot simpler than any fast JavaScript runtime I've ever used. I think that's the result of a conscious choice by the maintainers/BDFL/community in Python, not a side effect of it having less funding (assuming that's the case).


The primary difference is that JavaScript has serious JITs as the normal case of running JavaScript, while Python is normally interpreted.


I don’t think it necessarily impacts the runtime (though it could).

This could be my naïveté, but I don’t see why generics in Go couldn’t work analogously to the way they do in TypeScript, just a way for the compiler to verify the correctness of code accessing an interface{}.


No. C++ is very fast. It has generics.


Also, very slow to compile.


A simple example of generics is generic collections.

Without generics, if you need a list filled with TPS Reports you basically have 2 choices:

- Use the 'untyped' List that deals with objects. You will have to cast to TpsReport whenever you look up an item, and you will have to make sure that no one accidentally adds a Timesheet to this list.

- Write a custom TpsReportList. I'm sure you'll be able to write an implementation as efficient as the language creators'. Oh, and if you copy and paste from TimesheetList, don't forget to check all the names. `tpsReports.AddTimesheet()` is just embarrassing.

With generics, you will have a `List<T>` type. If you need a list of TPS Reports you will use the type `List<TpsReport>`. If you need a list of Timesheets, you will use `List<Timesheet>`.

You can't add the wrong type of item to such a list; you will always get the declared item type back from it, and you can't assign an instance of one type to a variable of the other type.


As a newcomer to Go, by an immensely wide margin, the hardest, most frustrating thing, which soured the language for me, is whatever the heck package management is in Go.

There's like three or four different angles, all of which overlap. Some are official. Some aren't. The unofficial ones seem more popular. They're all kind of incomplete in different ways. And it was all such a frustrating migraine to try and figure out. It's been a long time since any piece of software made me feel as viscerally aggressive as that whole experience did.

I hope Go 2 makes something concrete from the start and sticks with it, for better or worse.


Having used the new Go module system (introduced in Go 1.11 as an option, to be the default choice in 1.12) since August, it's my opinion that this is now a solved problem.

The biggest source of pain moving forward is going to be the projects that haven't transitioned, including the various command-line tools that work on parsing, generating and manipulating Go code (e.g. linters, code generators). Most of the important ones are already there, and I've transitioned several myself.

As an added bonus, word is that the Go team wants an official package repository system (similar to Cargo, RubyGems etc.). I wouldn't be surprised if this happens rather quickly.


> it's my opinion that this is now a solved problem.

It's starting to look like a viable solution, but it's not even close to actually solved yet. Why does `go mod why <module>` make changes to your module? How do you run go get to install a remote package when modules are enabled (without explicitly running `GO111MODULE=off go get`)? Why isn't the module cache concurrency-safe? Why does the module cache sometimes mysteriously cause compile errors until you `go clean -modcache`? There are so many little bugs and oddities.

And as you mentioned, a lot of things have side effects now that didn't use to, which has catastrophically broken a lot of the tooling surrounding the language. Autocomplete using gocode used to be nearly instant. Now it sometimes triggers downloads and takes 30+ seconds.

I'm hopeful that go 1.12 will be the first release where this problem is really solved.


There are bugs, but I was referring to the design of the whole thing.

By the way, your "go get" bug was fixed today [1], if I understand your complaint correctly: With the new modules turned on, you could no longer do "go get" globally.

(I would agree that it's a little weird that "go get" outside a module installs it globally, while inside a module it installs it locally; that's going to trip up scripts and Dockerfiles, and it should really be two separate commands. "go mod add" to add a new dependency, for example.)

[1] https://github.com/golang/go/issues/24250#event-1996119923


There are still some rough edges, but I agree that the new module system is very good. And it's given me the confidence in Go to start using it far more broadly than I had before, when every new project meant hand-wringing over how to handle dependencies. Now it Just Works well enough for 90+% of use cases, with no extra steps required.


I just started learning Rust for a small project, and I found its "one clear path" model very appealing (I don't know if it is a formal goal, or if it's just a happy accident based on a smaller, more focused, community). It doesn't just apply to the package manager, but that's one of the first bits a beginner sees. Rust has a single, easy-to-find, "pretty good" answer for nearly every question a beginner asks, at least, it has, so far, for me. Cargo is that answer for packages and for building, and it's pretty good, and there's no debate about how to install or build Rust tools and libraries.

I didn't really find that to be the case with Go, so even though Go is a simpler language, I have been more productive, much more quickly, in Rust. Within a couple of hours of starting my project (with only a cursory glance over the docs and tutorial) I had my project daemonized, had options parsing working, had system calls working, got logging working, etc. It was shocking how quickly I was up and running.

I'd been hesitant to try Rust, because it looks big, and I'm kinda tired of big languages. I just don't have enough time/motivation to study a bunch of nuanced syntax and such; I'll never be a great C++ programmer, though I can muddle through and usually understand other people's C++ code. But, so far, Rust is proving to be one of the easier languages I've learned lately, partly due to the holy path being well-defined, and partly due to strong libraries that overlap my particular project perfectly (being a systems language built by very smart people, it has some very good systems libraries).

I don't mean to imply Go is a hard language, it's not. I picked it up pretty quickly, too. Both are easier to learn than, say, JavaScript, because they're much smaller. But, I agree that there isn't a super clear path forward for a beginner with Go, including with installing and building. You just have to acquire more tribal knowledge to work in Go than in some other languages (but, much less than many others...older languages tend to have tons of that kind of thing).


As a beginner, I really like rust.

It has excellent package management, robust compiler messages, pattern matching... etc. But once I start trying to build non-trivial data structures it becomes a nightmare. For example, a doubly linked list or any sort of graph is extremely hard for me to build in rust but extremely easy to build in golang.

I am still learning the full capability of rust; hopefully as I do more practice it gets easier. In the worst case I think I would still use it to build data pipelines, since I really enjoyed the syntax and the safety checks, as long as I don't get into smart pointers or raw pointers.

Also another annoying point is that some libraries use nightly.


I have heard this is a really good resource for understanding this exact aspect of Rust:

http://cglab.ca/~abeinges/blah/too-many-lists/book/


I have to ask why you're not using a data structure from a library, though.


You're not the only one having trouble writing basic data structures in safe Rust: [1]

[1] https://rcoh.me/posts/rust-linked-list-basically-impossible/


Of course, if you're comparing with Go, you need to compare apples to apples - since Go doesn't have ownership tracking, the equivalent Rust would necessarily have to be unsafe.


Yeah but Go has garbage collection. Rust's ownership model makes up for not having GC.


I think that heap-allocating everything and using Rc<T> and Weak<T> would be a closer comparison to what Go does, than using idiomatic Rust with borrow-checked locals etc.


Rust explicitly set out to have an excellent built-in package management solution. They tapped Yehuda Katz early on for this.


> They tapped Yehuda Katz early on for this.

After trying three or four times to build one on their own that is!


Rust easier to learn than JS, lol... In Rust you'll hit issues that you may or may not overcome easily; nothing like that happens in Go. And seriously, getting started in Go takes less than an hour:

- Install Go

- Install VSCode + Go plugin

- Start working


I explicitly said above that Go is an easy language to learn. But, I found Rust easier.

And, yes, I'm finding Rust easier to learn than JavaScript. Without question. It is a much, much, smaller and more cohesive language. It has some new concepts (features or techniques that I have never used in any other language), but so does modern JavaScript, and with JavaScript there's usually five ways to do it, and half of them are really poorly thought out. I don't dislike JavaScript. I'm not saying one shouldn't learn some; one definitely should. But, it's definitely going to be "some", for most people, because there's just too much of it to learn it all, unless you can devote yourself full-time to being a JavaScript expert. I aint got time for that.

As I mentioned, I've built a (toy) project in Go. I've spent more time with it than Rust at this point. I know how it works, and what getting started looks like. And, though Go was pretty easy with few pain points, I've found Rust to be easier and to provide a more clear path for a beginner to follow, so far.

Everyone is different, and we're all coming from different places. No one has exactly the same set of starting conditions for learning a new language as I do. For some, Go may be easier than Rust (I expected it would be, which is why I started with Go and avoided Rust for so long). For me, I am finding Rust easier. I haven't done much with it, but I was surprisingly productive surprisingly fast.


I for one totally agree on rust being easier to learn than Javascript. I suspect it is because of the explicitness of rust vs the flexibility of javascript, where you can do literally anything you wish without any sort of guard rails; then there's the ecosystem with like a gazillion tools in your build pipeline. I really admire front end devs because they do what I cannot. I guess I'm not as clever a programmer; I need the compiler to hold my hand, lead the way, and yell at me when I am going astray. Rust does that for me, and when I'm done abiding by the rules, cargo is there to take over the rest of the process.


I don't really care about the guard rails. I grew up in Perl (pre-strictures and warnings), and BASIC was the first language I ever built something in, so I'm not too fussed about things always feeling a bit loose, and having to defend yourself with tests and imposing some rules on yourself by convention.

While I think it's likely that having a stricter language is more likely to produce better software, especially as complexity grows, I don't think it has a huge impact on whether the language is "easy to learn" for me. Python is very lax (contrary to popular belief, Perl with strict/warnings is stricter than Python, and protects you against a wide variety of scope-related bugs, in particular) but I consider it an easy language, too, because it is consistent and small(ish). There's some kind of balance to be struck between elegance and simplicity, and between explicitness and concision, and Rust feels very good, so far. I don't think that balance will be the same for everyone.

JavaScript, to me, is hard just because it's so damned big and incoherent. It's been pulled in twelve directions at once for its entire life, and it shows. JavaScript is like a buffet that has Chinese food, Indian food, pizza, sushi, and tacos. Most of the food isn't very good, but there's a lot to choose from. It doesn't help that learning JavaScript also entails trying to make sense of the maelstrom of tooling that's available. While Rust has one clear path for beginners, JavaScript has a haunted corn maze.


Coincidentally, here's Rust's getting started workflow:

- Install Rust

- Install VSCode + Rust plugin

- Start working


Coincidentally, here's JavaScript's getting started workflow:

- Command + Option + J (on Mac+Chrome)

- Start writing JavaScript


For an apples-to-apples comparison, you would have to compare this to https://play.rust-lang.org and https://play.golang.org


My only experience with Go was around 2012-2013. It was fun, but I did not bother sticking with it. The forced directory structure was a little bit annoying and I found myself writing interfaces for everything. I'm sure people will say it's my fault as a programmer, but it turned me off from the language.


The new module system removes $GOPATH and the forced directory structure.


Go modules are available in go 1.11 today. If you're envisioning another new pkg management solution besides that, I don't think that's going to happen.


The fact that it didn't have an official package manager from day 1 is a big issue to me. All other new languages have an official package manager - elixir, rust, ... https://github.com/golang/go/wiki/PackageManagementTools


go1.11 pretty much solved this, though there are still many dependency managers out there and projects which rely on them that should be upgraded to support the New Way.

"package" in Go has always referred to a folder. Or in other words, a shared namespace for every entity exported by every file in that folder. "module" refers to a package which has a go.mod file in it; that module package + all of its children become versioned via that go.mod file and are distributed as one unit.

It operates identically to npm; npm packages are folders, and there's a special package with a package.json which versions that folder and all of its children. Then, that "package.json package" is what is distributed.


It's ironic from a language that invented gofmt...


I'm curious what language you come from that has much better package management than Go. I'm guessing not JavaScript, or C++, or Java, or Python, or...


I tried Go a while ago. I was hooked by the performance, the community around it and vendors support (AWS, Heroku, GCloud etc...) but I got quickly fed up by the awkward package management system, the weird syntax and the horrible idea of $GOPATH, especially on Windows.

Haven’t tried it since.

I hope a lot of this changes to make the language more welcoming for newcomers.


If $GOPATH was your biggest complaint, now may be a good time to give it another look. As of 1.11, there's an experimental feature called go modules that lets you avoid using GOPATH. I believe it's going to be non-experimental starting in 1.12.


That's terrific news! GOPATH and the file system conventions are horrible for me as well. It forces me to break my personal conventions and workflow that I use for every other language. I avoid using go for new projects now because it got to be so annoying and disruptive (a somewhat shallow reason, I know).


Don't get too excited. You don't have to place your projects within $GOPATH anymore, but all your dependencies are still forcefully downloaded to – and imported from – the shared $GOPATH. AFAIK they didn't provide a way to have your dependencies localized to a subdirectory of your project's root. The documentation for "go mod vendor"[1] makes it seem like it should accomplish that task, but I couldn't get it to work for the initial pull of dependencies – it only worked after dependencies were already downloaded to $GOPATH, at which point it was willing to make a copy within the project.

[1] "... or to ensure that all files used for a build are stored together in a single file tree, 'go mod vendor' creates a directory named vendor in the root directory of the main module and stores there all the packages from dependency modules"


That's not a shallow reason: the annoyance in aggregate motivated a language usability improvement with no regressions.


That ended up being the biggest hurdle for me. I wanted a single repository with some Go source code, some Python, some C++, and I didn’t want to have to put the repo in a specific place or set environment variables for every project.

Nowadays I just put my Go source code in <repo>/go/src/example.com/pkgname and that works well enough, but it's a bit clumsy and reminds me of bad experiences navigating Java source trees. I haven’t switched to modules yet but I will once I get 1.12 everywhere.


> there's an experimental feature

That's not a feature but a fix for a design problem, and currently the only fix is an experimental one.

For those seeking to invest their time learning a professional tool, that's a whole pile of no-nos that naturally point to a very hard pass.


Go modules are extremely suitable for current work; they're as un-risky as anything labelled "experimental" could be. I've been using them for a boring professional application for about 4 months now, and there are no hassles with using them.


> they're as un-risky as anything labelled "experimental" could be.

Yes, that's exactly the point, and why any decision to take a hard pass is more than obvious.


A ton of projects have upgraded to work well with modules already. Seriously, it's set and forget in your zshrc / bashrc / whathaveyou.


> Seriously, it's set and forget

If that was true they would not be euphemistically labelled as experimental.


I don't understand what you mean, sorry


Think of it this way: if you spend time learning and using a feature that is not guaranteed to be there, say, a year down the line, is that a good investment?


> a feature that is not guaranteed to be there, say, a year down the line, is that a good investment?

The Go devs have been extremely clear that Go modules is THE solution to Go packages.


If they do mean what they say then they will release a version of Go where the GOPATH nonsense is over and a fix for that design problem is not euphemistically described as experimental.

Until then, no LTS means not ready for production.


GOPATH is still being used internally by Go modules, but it's set up automatically and you never have to even know it's there. It will probably remain an option to use it as a user for a while due to backward compatibility.

I don't see how that prevents you from using Go modules.

> a fix for that design problem is not euphemistically described as experimental

The simple reason for it being "experimental" is because Go 1.11 is the first release to include it, not because of some inherent instability. Wait till February for Go 1.12 if you're so worried.

> no LTS means not ready for production

I don't know what you mean by LTS here, since every Go release as of 1.0 has been backwards compatible.


Learning Go modules is a minimal investment. It is a matter of hours rather than days.


It's not like there is much of a barrier there.


For some people and organizations, "as un-risky as anything labelled 'experimental' could be" is risky enough to be automatically disqualified from even being considered. There's the label 'experimental', so it can't be used anywhere touching the production codebase, regardless of any other arguments.


Experimental and production are two words that shouldn't be used together.


Same here. I've been using them in production with no issues.


"All that is excessive is insignificant.", said Talleyrand.

By your standards c and c++ are not professional tools.


Not the OP, but uh, no, neither of those is "professional" in the sense that professionals should use them, regardless of the plain fact that "professionals" do use them.

Those are both garbage languages, despite their utility.


I don't understand the issue with language improvements, should we only use languages that were perfectly designed to start? Like JavaScript?


> should we only use languages that were perfectly designed to start?

If there is a myriad of languages and we have limited time to invest in mastering one, we'd better use our time wisely and not waste it on those with severe design problems and designers who have refused to face those issues for years.


Weird syntax? Maybe you should try Haskell, OCaml, Erlang, or even Rust instead. Then come back to Go and tell me if it's actually that "weird."


It's a bit Pascal-y, which may be a turn-off to some.

I do wonder why you'd consider Rust (or even Haskell) to have weird syntax?


Rust is probably the least weird of those I listed. I find its syntax "too busy", similar to C++.

Haskell, I feel, needs no explanation.


Rust syntax being busy is something I've heard, but people point to things like lifetime annotations, which sort of make Rust what it is so...


Also, lifetime annotations are not required on the overwhelming majority of functions. If you have, say,

  fn substr(text: &str, start: usize, length: usize) -> &str;
the compiler will infer automatically that the return value inherits the lifetime of the `text` argument. I don't find that line up there particularly busy.


>I tried Go a while ago. I was hooked by the performance, the community around it and vendors support (AWS, Heroku, GCloud etc...) but I got quickly fed up by the awkward package management system, the weird syntax and the horrible idea of $GOPATH, especially on Windows.

My experience was similar. In addition to those, I found I really missed a REPL console, and more so something like byebug that RoR has. For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code, which opens an in-context REPL. It's enormously helpful for hard-to-figure-out bugs.

The other thing that turned me off from Go was testing. It has good enough support for unit testing but it really lags behind in integration testing. Sometimes you want to know that if you hit this endpoint with this payload you get this response back. It's harder than it should be to write integration testing where you spin up the application and test it end to end.


> In addition to those, I found I really missed a REPL console and moreso something like byebug that RoR has. For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code that opens an in context REPL. It's enormously helpful for hard to figure out bugs.

This is like comparing apples and oranges, or at least, like comparing apples and apple-orange hybrids :)

Just saying that Rails (RoR) has a command like byebug that opens an in-context REPL, seems to ignore the fact that compiled languages like Go catch many errors earlier, at compile time, so you don't even need an in-context REPL or a debugger to find those.

Not saying that features like byebug have no use at all, of course.


Is this the same type system that famously forces you to cast items in your collections to the universal supertype?


You know well that it is. Add /s or /rhetoric to your question next time :)

Languages can be pretty useful even if they are imperfect, as others have said. And Go is plenty useful. The Stroustrup quote about C++ comes to mind ...


Here you go:

https://en.wikiquote.org/wiki/Bjarne_Stroustrup

"There are only two kinds of languages: the ones people complain about and the ones nobody uses."

And here's a good set of Q&A from his FAQ; it includes the above quote too:

"Did you really say that?":

http://www.stroustrup.com/bs_faq.html#really-say-that


Everyone's aware of the lack of generics, it's been rehashed to death. This is a discussion on Go 2, which will add them.

The current type system is poor, but still better than dynamic languages.


> which will add them.

"Generics" can be a lot of things in a lot of ways. AFAIK there is no official Go design for generics, or a list of the deficiencies they will have, so a blanket "this particular problem will of course be solved well" is speculative.


Elixir's IEx.pry/0 function, that opens a REPL at a particular point of execution, is probably the stdlib function I call the most while developing. In a compiled language, to boot.


Wow, that's cool. Wonder how it works. Does it spawn an instance of the REPL with your app and its current state?


>This is like comparing apples and oranges, or at least, like comparing apples and apple-orange hybrids :)

>Just saying that Rails (RoR) has a command like byebug that opens an in-context REPL, seems to ignore the fact that compiled languages like Go catch many errors earlier, at compile time, so you don't even need an in-context REPL or a debugger to find those.

I definitely liked the type system of Go and the compile time errors it found. It gives you a lot of peace of mind and reduces some errors. However, those errors aren't the things I use byebug for. Byebug is for things like figuring out why your code went down Path A when you expected it to go down Path B. Or if you don't know how to do something it's a sandbox to try a few different things until you get the output you were looking for.


>Byebug is for things like figuring out why your code went down Path A when you expected it to go down Path B.

Got you now. The classic use case(s) of a REPL. I may have misunderstood your earlier comment, which is why in my own earlier one, I had said "seems".


Yeah, and having access to a REPL can catch semantic and architecture issues before you finish integrating and fire up the whole application. As do testcases.

Compilation is not a magic bullet. Although, to be fair, I pushed Golang at work precisely because you have to compile it first. Not everyone is diligently testing their code...


>Yeah, and having access to a REPL can catch semantic and architecture issues before you finish integrating and fire up the whole application. As do testcases.

True about the semantic and testcases parts. Not clear how it helps with architecture.


There's gomacro, which is a Go REPL with debugging:

https://github.com/cosmos72/gomacro

It would be awesome to have a REPL in the standard Go distribution, and I definitely feel the lack: even Java now provides one (JShell).


>There's gomacro, which is a Go REPL with debugging: https://github.com/cosmos72/gomacro

Will check it out, thanks. Had just been thinking whether there is any Go interpreter some time ago. I remember C having at least one, back in the day.

>It would be awesome to have a REPL in the standard Go distribution, and I definitely feel the lack

Agreed. And it should be more like IPython (command-line version) than like the stock Python shell.

>even Java now provides one (JShell).

Good to know.


I think the closest Go equivalent of 'byebug' would be delve[0].

It's got more of a learning curve than a REPL, but very powerful. Editor plugins for Go also tend to have good convenience support for delve, making using it less painful. For example, vim-go[1].

[0]: https://github.com/derekparker/delve/blob/master/Documentati...

[1]: https://github.com/fatih/vim-go/blob/master/doc/vim-go.txt#L...


> For those that aren't familiar, byebug lets you put the command "byebug" anywhere in your code that opens an in context REPL. It's enormously helpful for hard to figure out bugs.

Isn't that what a normal debugger does? Or am I simply just living in lala-land because of C#.NET/Visual Studio's terrific debugging experience?

I remember php and the hell that was xdebug. It was much easier and more efficient to simply do a `var_dump` whenever you needed to debug.


You might like Rust. It has a much better story in these areas.


Weird syntax? I’m sorry this doesn’t look like your favorite language.


hah. GOPATH is the only thing I like about Go. I have all (non-Go) repositories cloned as URL-style paths e.g. ~/src/github.com/user/repo.

I strongly dislike the non-standard internals (horrendous custom assembler, direct usage of syscalls instead of libc) and the "developers are too stupid to use this" attitude towards modern language features.


> direct usage of syscalls instead of libc

But go isn’t built on top of C? Why would they add more dependencies that complicate and slow down the compilation process and hurt portability?


No, by default Go programs and the whole Go toolchain have no C dependencies. The whole Go toolchain is implemented in Go and it doesn't link against any C libraries. Go links against libc only if you either use CGO or a package which has C dependencies, but I am not sure whether there is any package left in the standard distribution that does.


Same for me. I moved all my code into the GOPATH pattern after finding that I had 3 separate clones of the Linux kernel in separate places. (And not just for development, just for reading and grepping.) I made myself a helper tool for navigating the GOPATH:

  $ cg gh:torvalds/linux # cg = cd to git repo
  $ pwd
  .../src/github.com/torvalds/linux
The code is at https://github.com/majewsky/gofu#rtree if anyone's interested.


> direct usage of syscalls instead of libc

Why do you want to use libc for Go? You wouldn't use the Rust standard library for Go, why the C standard library?


It is very common for language runtimes to link and depend on libc on Unix, even if the libc API is not directly exposed in those languages. Go is somewhat unusual in this regard.

MacOS doesn't guarantee backward compatibility for direct syscalls. This has caused bugs like this with compiled Go binaries:

https://github.com/golang/go/issues/16570

Go has very recently started using libSystem (which is analogous to Linux libc or Windows CRT) on macOS to avoid this issue.

On a more philosophical level, POSIX is defined in terms of a C standard library, and not using libc means Go doesn't support and/or must implement itself various features that are otherwise provided to POSIX applications by the system (like locale handling). Your mileage may vary in terms of whether that's a bad thing or a good thing.


> It is very common for language runtimes to link and depend on libc on Unix

It is, and this is where you have to choose between a fragile executable that links against a specific vendor and version of libc (glibc, musl, etc.), or a bloated executable that statically links it.

> MacOS doesn't guarantee backward compatibility for direct syscalls.

Sounds like Go should use the stable API MacOS does offer. If the stable API is libSystem (different than libc), then so be it.

But if we're talking about Linux libc, there's no reason for Go to use it.

> POSIX is defined in terms of a C standard library

Specifically POSIX.1 (not POSIX.2).

Also, for what it's worth, POSIX compatibility falls short of Go's goals for compatibility. Specifically, on Windows. So I'm not sure what there is really to gain by following a different language standard for a different set of platforms that you desire to support.


It's also worth considering that none of the major systems are actually POSIX-compliant today. You're always going to need to make specific allowances if your standard library supports anything beyond the absolutely trivial even if you ignore Windows.


On glibc-based Linux you generally need to compile on a system running the oldest (or close to the oldest) version of glibc you want to support. I suspect this alone was the reason for Go being built that way.

That said, on macOS it makes a lot less sense because libSystem's compatibility guarantees work a lot like those of Win32, where you can safely compile on newer versions of the OS as long as you don't actually use features that are newer than your deployment target.


Windows syscalls are exposed via User32.dll, Kernel32.dll and friends, not MSVCRT.dll.


Actually, the issues you mention are already fixed by Go modules.

EDIT: Removed a parenthetical that was based on a misreading of the parent.


I believe they’re saying that vendor support is a good thing in Golang.


Thanks; I definitely misread. I deleted the relevant bit from my post.


No oxford comma:

the performance, the community around it, and vendors support


FWIW GOPATH has not been required since 1.8. It now defaults to $HOME/go if not set.

Go is pretty easy to get up and running in Windows. There's an installer for the compiler and you can install vscode and the Go extension pretty quickly.

Windows is an afterthought for most programming languages (ever try ruby or c++?) and Go's cross-platform capabilities were a breath of fresh air.


I agree with the package management system and $GOPATH. I would add that it is awkward that most of the libraries are not thread-safe when goroutines are core to the language.


Same here. For me it was primarily the syntax. So many people think that syntax is something you get used to, but I don't. Syntax matters a lot for me, and the way Go does it just isn't compatible with my brain.

Regular grammars are great for parsers. But really, having an easy to read (conceptually!) language is way more important imho.

But call me crazy when I say that I like C++ and can read it effortlessly :)


Well of course you get used to it, like you got used to C++ (unless you were born that way, which I doubt).

It just takes time, you don't want to spend that time getting used to Go and that's perfectly understandable.


I can get used to something and still dislike it. With Go it's mostly the bracket style, or put differently, the inability to turn off the automatic addition of semicolons before parsing. I'd actually rather "have to" put semicolons in manually, but no, I have to suffer so others don't have to put an additional line into their style guide.


Yeah, I completely agree. Though I think when you really get used to a language and understand it more deeply, you tend to understand the trade-offs that were made, if the language is well designed. Then you can still think that the trade-offs don't match your requirements.


"you'll get used to it" is a bad measure of usability.


But so is "I am already used to it," and so how do you find an objective, non-biased, measure of what is usable?


C++ is definitely something to behold in terms of syntax. It's hard to justify much of the type-system syntax (meta-templates, `sizeof...(Args)`, and `std::forward<Args>(args)...` come to mind). I'm picking on argument-packs here and ignoring the beauty of initializer-lists, operator-overloading and user-defined literals (which allow great syntax in user-code but their own syntax is haunting in library-code). I doubt that C++ would look the same if the syntax were created from scratch with the current set of features.

So it's fair to say Go's syntax is unfortunately limited in expressiveness (and I too find myself mystified by how to express ideas neatly with go), but I have a hard time imagining C++ is something worth emulating without adding a whole bunch of subjective caveats to what "should" be avoided.


If you like C++ then you just haven't given Go enough time to sink in.

It reads more left to right, rather than right to left like C++.


I've given Go plenty of time to sink in, and I still prefer C++. Things like not allowing implicit type conversions in cases that clearly are not a problem (int16 -> int32) are just annoying.


Call me crazy but deep inside I'm a toaster.
