Swift: Madness of Generic Integer (krzyzanowskim.com)
148 points by krzyzanowskim on March 3, 2015 | 75 comments



To summarize, the author has two problems:

1. Bit shift is not part of the IntegerType protocol, when it should be (although the author could avoid the issue by accumulating the bytes in a UIntMax instead of the generic type).

2. Construction from (and conversion to) a UIntMax bit pattern is not part of the IntegerType protocol, when it should be (done correctly, this addresses the author's sign and construction complaints).

The author incorrectly claims/implies that these are problems with generics or protocols or the diversity of integer types in Swift. They're really problems of omission in the standard library protocols, and those omissions force some very cumbersome workarounds. The necessary functionality exists; it just isn't part of the relevant protocols. Submit these as bugs to Apple.
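For illustration, the shape of the missing bit-pattern construction might look roughly like the sketch below. This is hedged: it is shown as a separate protocol (you can't add requirements to the standard library's IntegerType yourself), the protocol and initializer names are made up, and it targets Swift 1.x-era syntax.

    protocol UIntMaxBitPatternInitializable {
        // build Self from the low-order bits of a UIntMax
        init(bitPatternFromUIntMax value: UIntMax)
    }

    extension UInt8: UIntMaxBitPatternInitializable {
        init(bitPatternFromUIntMax value: UIntMax) {
            // mask first so init(_: UIntMax) can't trap on overflow
            self.init(value & 0xFF)
        }
    }

    extension UInt16: UIntMaxBitPatternInitializable {
        init(bitPatternFromUIntMax value: UIntMax) {
            self.init(value & 0xFFFF)
        }
    }

A generic integerWithBytes could then be constrained to IntegerType plus this protocol and build its result without unsafe pointers; signed types would conform with an equivalent mask-and-reinterpret step.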

Edit:

As a follow-up, here's a version that gets around the standard library limitations using an unsafe pointer...

  func integerWithBytes<T: IntegerType>(bytes:[UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }
    var i:UIntMax = 0
    for (var j = 0; j < sizeof(T); j++) {
        i = i | (UIntMax(bytes[j]) << UIntMax(j * 8))
    }
    return withUnsafePointer(&i) { ip -> T in
        return UnsafePointer<T>(ip).memory
    }
  }
Of course, at that point, why not simply reinterpret the array buffer directly...

    func integerWithBytes<T: IntegerType>(bytes:[UInt8]) -> T? {
        if (bytes.count < sizeof(T)) {
            return nil
        }
        return bytes.withUnsafeBufferPointer() { bp -> T in
            return UnsafePointer<T>(bp.baseAddress).memory
        }
    }


The more protocols that are added, the more concepts there are to scare people away from what they think of as relatively simple primitives. 26 is already a scary number of concepts to tie to simple whole numbers.

To be clear, the complexity is inherent in using numbers programmatically. The only real way around it would be to reduce flexibility around overloading operators, forcing people to implement bundles of related operators that are all associated with a concept (protocol). This would decrease the utility of overloading for implementing some algebras.
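For example, a hypothetical bundle of shift operators (not a protocol from the standard library) would look like the following in Swift 1.x-era syntax; any type that wants << through the protocol is then also on the hook for >>.

    protocol BitwiseShiftable {
        func <<(lhs: Self, rhs: Self) -> Self
        func >>(lhs: Self, rhs: Self) -> Self
    }

    // The existing global shift operators already satisfy the requirements.
    extension UInt8: BitwiseShiftable {}
    extension Int32: BitwiseShiftable {}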


I definitely agree that, by far, the best way to do this is to just use type punning with pointers. But if you really want to rebuild this yourself, you can still do so without the unsafe pointer, by handling signed and unsigned separately.

Unsigned is easy:

  func integerWithBytes<T: UnsignedIntegerType>(bytes: [UInt8]) -> T? {
      if bytes.count < sizeof(T.self) {
          return nil
      }
      var acc: UIntMax = 0
      for var i = 0; i < sizeof(T.self); i++ {
          acc |= bytes[i].toUIntMax() << UIntMax(i * 8)
      }
      // UnsignedIntegerType defines init(_: UIntMax)
      return T(acc)
  }
Signed is trickier because of the sign bit. You have to sign-extend manually:

  func integerWithBytes<T: SignedIntegerType>(bytes: [UInt8]) -> T? {
      if bytes.count < sizeof(T.self) {
          return nil
      }
      var acc: UIntMax = 0
      for var i = 0; i < sizeof(T.self); i++ {
          acc |= bytes[i].toUIntMax() << UIntMax(i * 8)
      }
      if sizeof(T.self) < sizeof(UIntMax.self) {
          // sign-extend the accumulator first
          if bytes[sizeof(T.self)-1] & 0x80 != 0 {
              for var i = sizeof(T.self); i < sizeof(UIntMax.self); i++ {
                  acc |= 0xFF << UIntMax(i * 8)
              }
          }
      }
      // We're assuming that IntMax is the same size as UIntMax and therefore
      // uses init(bitPattern:) to convert, but that should be safe.
      let signed = IntMax(bitPattern: acc)
      // SignedIntegerType defines init(_: IntMax)
      return T(signed)
  }
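A quick sanity check of the two functions above (assuming they compile as written; both loops read the bytes as little-endian):

    let u: UInt32? = integerWithBytes([0x01, 0x00, 0x00, 0x00])   // Optional(1)
    let s: Int32?  = integerWithBytes([0xFF, 0xFF, 0xFF, 0xFF])   // Optional(-1), sign-extended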


> not part of the IntegerType protocol, when it should be

> Submit these as bugs to Apple.

Language designers keep making this mistake. .Net has this problem. Supposedly a bug was submitted for it, and it can't be fixed without breaking some backwards compatibility. Luckily Swift is beta (right?).

Either way, here is how I solve this "safely" in .Net:

1. Never do bit operations against signed integers. The behavior varies wildly across languages; it's best to just avoid this altogether.

2. A UInt64 is bit-identical to the Int64 that it was cast from.

I'm guessing at the Swift syntax, but those concepts translate to:

    func integerWithBytes(bytes: [UInt8]) -> UInt64? {
        if (bytes.count < sizeof(UInt64)) {
            return nil
        }
        var i: UInt64 = 0
        for (var j = 0; j < sizeof(UInt64); j++) {
            i = i | (UInt64(bytes[j]) << UInt64(j * 8))
        }
        return i
    }
That's right: you simply don't need a generic method.
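And for point 2 above, once you have the UInt64, reinterpreting its bits as a signed value is a one-liner in Swift (a small illustration, not part of the parent's code):

    let raw: UInt64 = 0xFFFFFFFFFFFFFFFF
    let signed = Int64(bitPattern: raw)   // -1: same bits, different interpretation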


This is a great pragmatic example.

Swift is a fantastic language with many features that let you write safe and strict code. But do not forget, Swift sits on top of a foundation of C and Objective-C. In my opinion a great Swift programmer knows when to take advantage of that.

And, if written correctly, like the last example above, it is totally possible to write elegant and safe code. In my opinion there is nothing wrong with an approach that uses something like withUnsafeBufferPointer().


> Of course, at that point, why not simply reinterpret the array buffer directly...

... because then the behavior of the code depends on the endianness of the CPU, and thus is a potential future portability gotcha... ? :)
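One way to plug that hole in Swift, assuming the fixed-width types' byte-order initializers behave as documented, is to read the raw value and then state the byte order you meant:

    let bytes: [UInt8] = [0x01, 0x00, 0x00, 0x00]
    let raw = bytes.withUnsafeBufferPointer { bp -> UInt32 in
        return UnsafePointer<UInt32>(bp.baseAddress).memory
    }
    // interpret the loaded bits as little-endian, whatever the host order is
    let value = UInt32(littleEndian: raw)   // 1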


Wow, that takes me back. Back to a conference room where we were talking about Integers in Java. If you made it a class, an Integer could carry along all this other stuff about how big it was, what the operators were, etc. But generating code for it was painful because your code had to do all of these checks when 99% of the time you probably just wanted the native integer implementation of the CPU. And Booleans: were they their own type or just a 1-bit Integer? And did that make an enum {foo, bar, baz, bletch, blech, barf, bingo} just a 3-bit integer?

Integers as types can compile quickly, but then you need multiple types to handle the multiple cases. Essentially you have pre-decoded the size by making it into a type.

At one point you had class Number, subclasses Real, Cardinal, and Complex, and within those a constructor which defined their precision. But I think everyone agreed it wasn't going to replace Fortran.

The scripting languages get pretty close to making this a non-visible thing, at the cost of some execution speed. Swift took it to an extreme, which I understand, but I probably wouldn't have gone there myself. The old char, short, long types seem so quaint now.


Java's initial approach/implementation is just one way to do that. If you actually use objects to represent your integers, then of course it will have to do extra checks. But Java is not the best example here, because with a better language you can drop all those checks at compile time and get the "native integer implementation" in the code where it's used.

Even C has had very basic compile-time generics since C11. There's no runtime overhead at all in that case. (http://www.robertgamble.net/2012/01/c11-generic-selections.h...)


Just because they look like objects to the programmer doesn't mean they need to be objects at the implementation level; see the Pascal family of languages.


This is only true until the programmer actually uses them like objects. If the programmer, say, sticks the IntObj into a LinkedList<IntObj>, suddenly that integer is going to need some sort of additional overhead associated with it.


Why?

It is a matter of how a specific language is implemented: whether tagged types are used, how generic code is handled, and so on.


Because adding/removing an object in a linked list requires some sort of connection between the object and the list (e.g. a "next" field). Regardless whether or not that's abstracted away by the language --- no language can perform magic.

(To pre-empt a possible misunderstanding: I'm talking about adding the integer itself to the list, not a copy of a snapshot of its value.)


What? Are you familiar with pointers? If you want a List<*int>, you can have that. "Next" is part of the list cell, not the contained object.


Yes, I'm a bit familiar with pointers ;)

You can store the "next" in a list cell if you want, but there still has to be some way to figure out, given just the object, which cell it corresponds to. Well, you could traverse the whole list to find it, but I sure hope you understand why that's a bad idea. Sure, compute a hash---but that computation is overhead.

Why do you think Java distinguishes Integer from int? If it were possible to have integers that walked, talked, and quacked like an object, but without any overhead, then we wouldn't use integers. We'd use those things instead.


> Why do you think Java distinguishes Integer from int?

Because the compiler writers didn't want to spend effort using tagged types or compiler optimization techniques.

The original goal was to generate bytecode for simple execution in embedded devices.

.NET for example, does not distinguish between Integer and int. One is the alias for the other.

Eiffel INTEGER is mapped to a plain C int[0], when generating native code.

Smalltalk implementations usually use byte tagging to map primitive objects to register sizes. Described in the blue book.

Ada Integer has quite a few pre-defined attributes and the language allows for additional user-defined attributes[1]. Additionally one can specify the number of bits used for storage.

Any good CS compiler design course would cover such cases in detail.

[0] Which is a pure OO language.

[1] think methods


Compile-time polymorphism has no need to incur the kinds of costs you're hinting at here. Consider e.g. Ada attributes, range types, 'First, 'Last etc.: these things are all resolvable at compile time.


I once, many years ago, wrote something titled "Type Integer Considered Harmful". (This was way back during the 16-32 bit transition.) My position was that the user should declare integer types with ranges (as in Pascal and Ada), and that it was the compiler's job to ensure that intermediate variables did not overflow unless the user-defined ranges would also be violated. Overflowing a user range would be an error. The goal was to get the same answer on all platforms regardless of the underlying architecture.

The main implication is that expressions with more than one operator tend to need larger intermediate temporary variables. (For the following examples, assume all the variables are the same integer type.) For "a = b * c", the expression "b * c" is limited by the size of "a", so you don't need a larger intermediate. But "a = (b * c)/d" requires a temporary big enough to handle "b * c", which may be bigger than "a". Compilers could impose some limit on how big an intermediate they supported.

This hides the underlying machine architecture and makes arithmetic behave consistently. Either you get the right answer numerically, or you get an overflow exception.

Because integer ranges weren't in C/C++, this approach didn't get much traction. Dealing with word size differences became less of an issue when the 24-bit, 36-bit, 48-bit and 60-bit machines died off in favor of the 32/64 bit standard. So this never became necessary. It's still a good way to think about integer arithmetic.


Especially in higher-level languages, I've wished language designers would move toward using variable-size/bignum integers instead of fixed size integers (Python does this, for example). It eliminates overflow, and the need to analyze each int to see if it will overflow the type you're sticking it into.

I wouldn't mind being able to have a "RangedInt<min, max>" type either, in addition. If the bounds are tight enough, the compiler could just use the next-bigger machine integer type (and do bounds-checking, please!). I think an integral type that was always modulo the max would be useful in many applications as well (i.e., unsigned, and overflow is well-defined to wrap, but you explicitly opt in to this behavior). You can imagine,

  int: signed, bounded only by memory
  ranged_int<min, max>: integer type capable of holding anything in [min, max].
     Over/underflow is an error (exception? panic? [1])
  modulo_int<min, max>: unsigned, overflow wraps.
     (Mathematicians probably have a better name here… "ring"?)
  "usize" or "size_t": capable of holding any memory address, so useful for indexes.
  native::uint8, native::uint16, etc: whatever your hardware gives you, if you really need it.
The default type a new coder would grab for (int) won't overflow on them, although there are questions about what some_array[int_index] does, especially if it overflows the index type.

[1] Rust has some interesting thoughts here, and I thought they did a good job of detailing the consequences and their reasoning; see https://github.com/rust-lang/rfcs/blob/master/text/0560-inte....
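A rough Swift sketch of the ranged_int idea above (Swift generics can't be parameterized by integers, so the bounds live in the value here; the names and the nil-on-overflow behavior are just one possible choice):

    struct RangedInt {
        let min: IntMax
        let max: IntMax
        let value: IntMax

        // fails (returns nil) instead of trapping when the value leaves the range
        init?(_ value: IntMax, min: IntMax, max: IntMax) {
            if value < min || value > max {
                return nil
            }
            self.min = min
            self.max = max
            self.value = value
        }

        // note: value + delta itself could still overflow IntMax in this toy version
        func adding(delta: IntMax) -> RangedInt? {
            return RangedInt(value + delta, min: min, max: max)
        }
    }

    let percent = RangedInt(101, min: 0, max: 100)   // nil: out of range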


If you want wrapping, use the "mod" or "%" operator. The compiler should be made to understand how to generate fast code for idioms such as "n = (n+1)%65536;"


I'd like to have your version of Int but with a twist: let it be a compilable language and let me have a compiler option to disable all the checks for the release version. Having all the bounds checks in place will hurt your performance way too much for performance-sensitive code - and chances are, if you care about ints vs. floats instead of using a language with a general 'number' type, you do care about performance.


The author of Cap'n Proto recently wrote an article about representing range types via C++ templates to protect against overflow: https://capnproto.org/news/2015-03-02-security-advisory-and-...


For amusement value, the Haskell equivalent is:

  {-# LANGUAGE LambdaCase #-}
  import Data.Bits
  import Data.List (unfoldr)

  f :: (Num a, Bits a) => a -> [a]
  f = unfoldr $ \case
    0 -> Nothing
    n -> Just (n .&. 0xff, n `shiftR` 8)


That's the inverse of the function the author wants, which ought to have type '[Word8] -> Maybe a' (where 'Nothing' is returned when the bytes exceed the range of the type in question).


You could use the Bits instance of Integer and then convert from Integer to your final type. All types instantiating Num support `fromInteger :: Num a => Integer -> a`. Unfortunately, the design of this interface was for syntactic convenience instead of semantic strength so it is silent on what happens when you project an infinite type like Integer into a finite one like Word8. Implementers might wrap the conversion or fail with some kind of exception. A better (if less convenient) implementation would end in a Maybe type.


My bad. Then I'll leave it as an exercise for the reader; the signature of the correct function is the same.


This is so typical in the Haskell world: comment with a short snippet of code that does not produce the intended result, and when this is pointed out, dismiss it as a trivial implementation problem.


Here you go, then:

    import Data.Bits
    import Data.Word (Word8)

    -- little-endian sequence of bytes to arbitrary integral type
    integerWithBytes :: (Bits a, Integral a) => [Word8] -> a
    integerWithBytes = foldr (\byte acc -> (acc `shiftL` 8) + fromIntegral byte) 0
At least, I'm pretty sure that's what the article was trying to do...

I don't think this is some amazing showcase of Haskell; after all, the article includes a (working?) implementation in C++. But it does serve as a nice counterexample to the idea -- expressed elsewhere in the thread -- that being generic is necessarily a hugely painful or complicated thing.


Yep this works:

    λ> import Data.Bits
    λ> let integerWithBytes = foldr (\byte acc -> (acc `shiftL` 8) + fromIntegral byte) 0
    λ> integerWithBytes [0xFF, 0xFF, 0xFF, 0xFF]
    4294967295


I feel like you are overstating both how much this happens in Haskell and how exclusive it is to the Haskell community.

Also see bkirwi's response for the working version, then my reply to him for a sample run.


Generic code is like nerd sniping.

I look at this and think "why would you want to write generic code for all those ints?"

The integer types may look similar but they're different in more ways than they're similar. They have different bit sizes, different signedness. The CPU literally has to do different things depending on whether it's `uint8` or `int64`. So why do you want or expect one piece of code that does it all?

It's just so much easier and faster to do it like Go: have non-generic functions that do exactly what you want and, as a result, get meaningful work done. It's faster to write (because you don't need to figure out how to do it in a generic way), faster and easier to read, and possible to make changes to one func but not others.


Why should I as the programmer have to do different things just because the CPU has to do different things? If the logic of what I want to do is the same in multiple cases, then I only want to write it once and let the compiler figure out what to do each time I call it. (It's the whole reason I write and call functions in the first place, instead of tediously manually inlining the corresponding code at each instance!)


But in this case, it isn't. Two's complement signed fixed-size integers are completely different from their unsigned brethren, and confusing them is an endless source of bugs.


> But in this case, it isn't. Two's complement signed fixed-size integers are completely different from their unsigned brethren, and confusing them is an endless source of bugs.

They both support a meaningful left-bitshift operation, which is what the author wanted to abstract over.


But they do not support a common right-bitshift operation, nor sign extension, which is what both the compiler and the poster above you are trying to make clear.


An interface providing a generic left shift doesn't have to provide a right shift at the same time. And after the shift, what the compiler complains about is not the arithmetic but only the type construction.


how would the compiler know what numbers to expect at runtime?


Either by specializing the generic code to the specific types at compile time, or by using dynamic dispatch. (Note that this is the same tradeoff you get if you write the non-generic code by hand, so generics don't create a tradeoff where there wasn't one already.)


Because you're writing code for a computer and sometimes it's expected that you actually care about what the computer is doing with your code.


Again, would you say the same thing about registers versus stack allocation (or heap allocation)?

"The CPU handles it differently" isn't by itself a legitimate argument against abstraction.


This code is converting a bunch of bytes into an integer by bit shifting. So it's specific to the binary representation of ints in your language, and how types of various sizes and signedness interact.

Note that this doesn't depend on your CPU, but rather on how integers and their byte representations are specified in the language.

I think it's quite reasonable to expect the programmer to know what that means, assuming they are writing bit-fiddling code like this.


None of that argues against abstraction. It's perfectly reasonable to want to write code that uses left bitshift and works on unsigned and signed 8, 16, 32, and 64 bit integers.


> The integer types may look similar but they're different in more ways than they're similar. They have different bit sizes, different signedness. The CPU literally has to do different things depending on whether it's `uint8` or `int64`. So why do you want or expect one piece of code that does it all?

> It's just so much easier and faster to do it like Go

Would you say the same thing about registers versus stack-allocated memory? ISAs treat registers and stack-allocated memory very differently, yet virtually nobody wants the "register" keyword in C back and nobody is asking for a similar keyword in Go. (In fact, Go even abstracts over heap memory in the same way.)

"The CPU does it differently" is not a valid argument per se against abstraction.


You're right. I should rephrase: I meant that in this case I see the abstraction as not being worthwhile, because it doesn't save enough time compared to not using it.


The cost of the abstraction is language-dependent, though, and the programmer might well decide the abstraction is worth it. For example, this particular abstraction is much cheaper in C++, which is an (admittedly small) point in C++'s favor.


I don't understand why you would advocate almost-copy&paste in situations where you have to deal with types of similar structure. I mean, you already use generic ints in almost every programming language out there. You don't write `a int32+ b` or `a uint8+ b`. It's just `a+b`.

> It's faster to write (because you don't need to figure out how to do in a generic way)

And every time you copy/paste code, it's easier and faster to make stupid copy/paste mistakes, either the first time or when you change one version but not the others.


To be fair, I believe OCaml does use different infix operators for integral vs. floating-point arithmetic (or did, when last I looked at it years ago).

But yes, the "problem" of just writing generic code involving things that are numbers, and letting your compiler or interpreter figure out the most efficient way to represent them in memory, is not really a problem; it's been solved by plenty of languages.


Because it should be trivial?

    #include <array>

    template<typename T>
    T getValue(std::array<char, sizeof (T)> bytes) {
        return *reinterpret_cast<T *>(bytes.data());
    }


Well, it's not really fair to compare an implementation that throws out memory safety to a type-safe language. Swift has pointer casts too.


A memory safe implementation is only a bit longer:

    #include <array>

    template<typename T>
    T getValue(std::array<char, sizeof (T)> bytes) {
        T result = 0;
        for (char byte : bytes) {
            result <<= 8;
            // cast so a negative char can't sign-extend into the high bits
            result |= static_cast<unsigned char>(byte);
        }
        return result;
    }


The reason this works in C++ is that it doesn't have protocols. The operations that T is required to support in order to be a valid type argument to this function are not expressed symbolically; they need to be documented, and otherwise you end up with compiler error messages that quickly get unwieldy in more complex scenarios.

In large part, that's the problem the OP is confronted with. When you make these things symbolic and first-class, unless they're extremely complete, you find holes in the system. And when they're very complete, you find yourself overwhelmed by the number and apparent complexity of what should be simple. There's an inherent conflict.


C++ templates do have "protocols"; they just aren't necessary. The result is that the perpetrator of a template error is ambiguous.

check out type traits: http://en.cppreference.com/w/cpp/types and std::enable_if: http://en.cppreference.com/w/cpp/types/enable_if

"concepts lite" is a proposal to add syntactic sugar for type traits as well as enhance them a bit.

in general this is how C++ does things now: first add library-level solutions as far as possible, then add language-level syntactic sugar once the usage and implementation are fully understood.


And note that C++ will someday add concepts, so they clearly want to move in a more Swift-like direction. A more even comparison would compare C++ with concepts to Swift.


    err := binary.Read(bytes.NewReader(b), binary.LittleEndian, &i)


That's comparing Go's library call to a full implementation though.


A memory unsafe version is possible in Swift too. In fact, an implementation of one is included at the top of the article.


reinterpret_cast<>?! that's implementation-defined, not to mention an aliasing violation, which is undefined behavior. what you want is:

    #include <array>
    #include <cstring>
    #include <type_traits>

    template<typename T>
    typename std::enable_if<std::is_trivial<T>::value, T>::type
    getValue(std::array<char, sizeof(T)> bytes) {
        T toret;
        std::memcpy(&toret, bytes.data(), sizeof(toret));
        return toret;
    }
there is no performance hit compared to your function. this is a type-safe and well-defined version of the function above.

btw, was your usage of trivial a pun? if so, that's amazing. we need more type traits puns.


Generics allow writing of algorithms—if you're writing a 'sort this array' function, why do you care whether the elements are integers or strings, so long as you can order them?

I do, however, agree with you in general—it seems like he's trying to re-implement generic serialization. Otherwise, it's difficult to see what a 'Bit' (for instance) has in common with an Int and not, for instance, a double.
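A minimal illustration of the "so long as you can order them" point in Swift: one body, any Comparable element type.

    func smallest<T: Comparable>(values: [T]) -> T? {
        if values.isEmpty {
            return nil
        }
        var best = values[0]
        for v in values {
            if v < best {
                best = v
            }
        }
        return best
    }

    smallest([3, 1, 2])         // Optional(1)
    smallest(["b", "a", "c"])   // Optional("a")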


Generic code is like nerd sniping.

(Remove "like"?)

You shouldn't use generics to magically get the compiler to do low-level things for you. IMO, generics (specifically C++ style generics) should be used to manage things at fairly high levels of abstraction for fairly domain-specific stuff. As Swift's Integers are built to prioritize performance and are fairly "close to the metal," this sort of problem is what you'd expect.

If you want to create beautiful edifices of pure mathematics, go write in Haskell. That's not what Swift or its standard library were built for.


> You shouldn't use generics to magically get the compiler to do low-level things for you.

Why not?

> As Swift's Integers are built to prioritize performance and are fairly "close to the metal," this sort of problem is what you'd expect.

I don't understand how one follows from the other. This is stuff that should be expanded at the compilation stage; there's no runtime overhead. Likely the whole function would be specialised for each usage and be small enough to inline. Since the loop has a known number of iterations, I wouldn't even be completely surprised if it came out as just a few basic instructions, with the array check skipped in some cases.


While I agree that other languages can handle things in a cleaner way at the expense of performance, I also think that treating smartphones with insanely powerful processors as if they were embedded devices is a bit disappointing. Most of the world's python code is probably running on commodity hardware with considerably less horsepower than today's smartphones. Even in python's case, for instance, if you need to drop into something closer to the metal (most of the time probably unnecessary), you can quite easily.


I also think that treating smartphones with insanely powerful processors as if they were embedded devices is a bit disappointing

Computation costs power, which is always going to be at a premium on mobiles in a competitive market. But if you want to have beautiful edifices of mathematics in Swift, you can always use Swift to write your own math library, and still get better performance than cpython. Expecting Swift's built-in types to trivially be your math library is wishing the design goals were different.


I guess this sort of gets at the crux of the issue: Do you want it to be more like a scripting language (which would basically give you the mathematical equivalent of "integer" including unlimited size) at the cost of speed, or do you want it to be closer to the implementation in the CPU, which entails dealing with 8/16/32/64 bit limits and sign bits?

Why not have a way to do both? You can get an easy-to-use Int when speed is less of a concern, and can deal with Int16's, Int32's, UInt32's and whatnot when the job demands it.


If you have tagged integers you can have 31-bit ints that are super fast (one shift away from the actual number). The performance cost (allocation) is only paid when it overflows.
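A toy Swift sketch of that tagging trick (not how any particular runtime lays it out): the low bit marks "this word is an immediate integer," so unboxing is a single arithmetic shift.

    // tagging drops the top bit, which is why the payload is only 31 bits wide
    func tag(value: Int32) -> UInt32 {
        return (UInt32(bitPattern: value) << 1) | 1
    }

    // arithmetic shift right restores the original value, sign included
    func untag(word: UInt32) -> Int32 {
        return Int32(bitPattern: word) >> 1
    }

    untag(tag(-21))   // -21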


I think you all are being too nice to Apple.

I had a similar experience as the blog post author. I spent many hours battling generics and the huge forest of (undocumented) protocols to do something seemingly trivial. I just gave up rather than try to pin down exactly what was wrong in a long and detailed blog post.

The prevailing answer to everything seems to be: write a bug report to Apple and use Objective-C (or Swift's UnsafePointer and related).

This ignores what I think really is the issue here: Swift has an overly complex type system. This picture:

http://swiftdoc.org/type/Int/hierarchy/

Tells a lot. And this is from unofficial documentation that has been generated from the Swift libraries. When you read the documentation Apple provides, there is little explanation of this huge protocol hierarchy and the rationale behind it.

It seems to me that Swift has been released in a rush, with bugs even in the core language and compiler, lacking documentation, and of course with an even larger number of bugs in the IDE support, debugging, etc.

Secondly: Swift battles the problem of easy-to-understand, type-safe generics like so many other languages, only it has it much worse: it carries a lot of stuff over from Objective-C and it has to support easy interoperability. Plus it has ideas like not allowing implicit conversion of number types (requiring an integer added to a double to be explicitly converted to a double), causing the big type system to show its messy head again and again.
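For instance, this is the friction the no-implicit-conversion rule creates:

    let count: Int = 3
    let unitPrice: Double = 1.5
    // let total = count * unitPrice        // error: Int and Double don't mix implicitly
    let total = Double(count) * unitPrice   // 4.5, after an explicit conversion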

I really want to love Swift but it will take years for Swift to be as clean and productive as Objective-C.

In my opinion, what Apple should have done was create the "CoffeeScript" of Objective-C: a language that essentially was Objective-C in terms of language features, but with a concise syntax.


How does [0xFF, 0xFF, 0xFF, 0xFF], interpreted as a UInt32, turn into 16777215?

I would have guessed 4294967295


I assume it's just a typo/incomplete edit; his graphic directly below has the bit representation 00000000111111111111111111111111 indicating [0x00, 0xFF, 0xFF, 0xFF].



Yep (credit to bkirwi):

    λ> import Data.Bits
    λ> let integerWithBytes = foldr (\byte acc -> (acc `shiftL` 8) + fromIntegral byte) 0
    λ> integerWithBytes [0xFF, 0xFF, 0xFF, 0xFF]
    4294967295


The Swift equivalent of his first `NSData` example is essentially this:

    func integerWithBytes<T:IntegerType>(bytes:[UInt8]) -> T
    {
        let valueFromArrayPointer = { (arrayPointer: UnsafePointer<UInt8>) in
            return unsafeBitCast(arrayPointer, UnsafePointer<T>.self).memory
        }
        return valueFromArrayPointer(bytes)
    }

    let bytes:[UInt8] = [0x00, 0x01, 0x00, 0x00]
    let result: UInt32 = integerWithBytes(bytes)
    assert(result == 256)


"All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections." ~David John Wheeler


I wouldn't expect a much better design from a language developed behind closed doors with no community input.

Now all we can do is file bugs with Apple and hope they improve it; they chose to release a new language with but three months of public beta. They obviously didn't care much about having their designs tested or incorporating feedback then.


> More or less, what I want to achieve can be done with old world NSData: data.getBytes(&i, length: sizeofValue(i))

That doesn't work in C/C++ if you are using a modern optimizer.

C does not have the other Swift issues the author mentions, so shifting into the largest int and casting from there does work.


I'm glad I'm not the only one having this issue.


So no macros? Seriously, isn't that the way to solve this?


No macros in Swift.



