Whenever possible, I like using tech that I can mostly understand.

A small simple language is something Swift is not.



By the principle of progressive disclosure, Swift aims to be useful even if you don't understand parts of it.


And that is an interesting experiment and perhaps a worthy goal, but having used Swift for a few years now, I don't think they succeeded at this. I'd say it demonstrated that "progressive disclosure" may be a great concept for games or apps, but there's just way too much difference between someone new to programming and a professional programmer for a language to cover all these cases. Apple doesn't even have a non-pro Final Cut any more.

Even in my first few days, having read Apple's Swift documentation, I had to continuously resort to StackOverflow for answers, which frequently sent me to the language grammar (which had many mistakes in it, and I think still does), and the Swift bug tracker. The inflection point on the learning curve is right after "hello world", which makes for great demos, but that's it.

Personally, I think even Common Lisp does a better job at progressive disclosure. There's huge areas of CL that you can simply ignore. I worked in CL for years before I bothered to learn about conditions, special variables, or 90% of LOOP's features. In Swift, almost right away I had to read everything related to errors. You can't ignore it, except in the simplest program.
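
For what it's worth, a minimal sketch of why (assuming nothing beyond the standard library): any call to a throwing API has to be acknowledged with do/try/catch, try?, or try!, so the compiler won't let you skip the error-handling chapter.

    import Foundation

    // A minimal sketch: decode(_:from:) is a throwing API, so the
    // compiler forces some form of error handling at the call site.
    struct User: Decodable { let name: String }

    let data = Data(#"{"name": "Ada"}"#.utf8)
    do {
        let user = try JSONDecoder().decode(User.self, from: data)
        print(user.name)
    } catch {
        print("decoding failed:", error)
    }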

I studied generics in college, as part of my data structures class, or maybe my compilers class. I've watched Alexis Gallagher's "PATs" video at least 3 or 4 times. I still don't understand Swift generics. Or maybe I understand what tools are provided, but I don't understand why you'd build (or want) a tool that makes it so hard to, say, define a new type with an "isEqual" method to determine if it's the same as another.

Every Swift programmer hits the "protocol can only be used as a generic constraint" phase pretty quick. My complaint with Swift generics was never that they're too simple. It would be great if this were just a corner that PL/type-system geeks could geek out on, but it's an area that you can't ignore.
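
For anyone who hasn't hit that wall yet, a minimal sketch of it (the diagnostic as it read before Swift 5.7 introduced `any`):

    // Equatable has a Self requirement, so before Swift 5.7 it could not
    // be used as a plain type, only as a generic constraint:
    //
    //     let values: [Equatable] = [1, "two"]
    //     // error: protocol 'Equatable' can only be used as a generic
    //     // constraint because it has Self or associated type requirements
    //
    // The sanctioned workaround is to go generic:
    func allEqual<T: Equatable>(_ items: [T]) -> Bool {
        guard let first = items.first else { return true }
        return items.allSatisfy { $0 == first }
    }

    print(allEqual([1, 1, 1]))      // true
    print(allEqual(["a", "b"]))     // false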


Hardly. Trying to read someone else’s code is “smack you in the face with a book” disclosure.

Go/C# pose no issue, but things like C++ cause total WTF moments, where other people's tastes and habits become mutually unintelligible dialects.


What’s your ideal simple language?

I’d argue that with type inference, immutability, and non-null as a default, Swift is an easy language in which anyone can become productive quite quickly.

    let x = "I'm immutable and not null"

    let favorite = ["Java", "Perl", "Swift"].shuffled().first


Why is `shuffled()` a function and `first` a property? Seems weird and not simple.


While you are free to implement functions and properties however you like, usually it's pretty clear which one is which. shuffled performs non-trivial computation and doesn't seem like an intrinsic property of the array, so it's a function; the opposite is true for first.


Also, on an immutable array `first` should always return the same value, whereas `shuffled()` may return different values each time it's called.
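
A quick sketch of that difference:

    let langs = ["Java", "Perl", "Swift"]
    print(langs.first ?? "empty")   // always "Java" for this array
    print(langs.shuffled())         // a new, randomly ordered copy each call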


If it's anything like C#, .shuffled needs to return a new object and incurs some expense to do so, so it's a function/method, but .first is immediately available, requires only a trivial amount of computation, and does not return a new array/list, so it's a property.


shuffled() can accept a randomness generator; this form just uses the default. Also, .shuffled() returns a new array. For in-place shuffling there is .shuffle(). This pattern is consistent: verb() is for in-place actions, verbed() is for actions returning a new collection, e.g. sort()/sorted().

Btw, there is also .first(where:) which accepts a predicate.
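
A small sketch of the pattern, plus the generator overload mentioned above:

    var numbers = [3, 1, 2]

    let sortedCopy = numbers.sorted()        // new array: [1, 2, 3]
    numbers.sort()                           // sorts in place

    let shuffledCopy = numbers.shuffled()    // new, randomly ordered array
    numbers.shuffle()                        // shuffles in place

    var rng = SystemRandomNumberGenerator()
    let another = numbers.shuffled(using: &rng)   // explicit generator

    let firstEven = numbers.first(where: { $0 % 2 == 0 })   // Optional(2)

    print(sortedCopy, shuffledCopy, another, firstEven ?? -1)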


Contrast that to Objective-C < 2, Ruby, or Smalltalk, where everything is a message sent to the object and there are no exposed public fields.


You're misremembering. Prior to ObjC 2, classes indeed exposed fields. Convention was that you should not touch them and use accessors instead, but they were most definitely there, declared in the `@interface`.

    @interface Foo : NSObject
    {
        NSString * name;
        NSInteger count;
    }

    - (NSString *)name;
    - (void)setName:(NSString *)newName;

    - (NSInteger)count;

    @end
(This is still legal, of course, but bad practice now that there's `@property` synthesis as well as ivars being declarable in either an extension or the `@implementation` block.)


A couple of reasons, which are applied consistently throughout Swift:

Shuffled may accept arguments; first does not.

The runtime cost of first is nearly zero, whereas shuffled is doing significant work (constant versus linear time).

It sort of doesn’t matter because autocomplete won’t let you do the wrong thing.


Yeah, first being a property is weird to me too.

Although I do believe Swift has "virtual properties" that are really a method under the hood.
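
Swift's term for those is computed properties; a minimal sketch with a hypothetical type:

    struct Temperature {
        var celsius: Double              // stored property: actual memory

        var fahrenheit: Double {         // computed property: getter/setter, no storage
            get { celsius * 9 / 5 + 32 }
            set { celsius = (newValue - 32) * 5 / 9 }
        }
    }

    var t = Temperature(celsius: 100)
    print(t.fahrenheit)                  // 212.0, computed on access
    t.fahrenheit = 32                    // calls the setter
    print(t.celsius)                     // 0.0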


The reverse seems weird to me. Things like the first element, or the count of an array seem conceptually more like properties than functions: they're just values derived from the data structure.

The fact that you need to use function call syntax for things like this in other languages seems like it unnecessarily exposes implementation details in the API because of limitations of the language.


For me it's exactly the other way around.

Having "first" as a property feels like exposing implementation details to me. Sure, an array/vector is a contiguous slice of memory filled with items of a certain size. But the vector/array is basically a pointer to the start of the memory plus a size, so "first" is not an inherent part of the data structure, but rather a computed/derived property.

When I see a property I think member of a struct, which is not the case here.

(ps: this is all really just hair splitting, it's a minor difference that you would get used to pretty quickly either way)


It's a reasonable point. But here `first` is actually declared in the `Collection` interface, which `Array` implements. There's no guarantee about what it's actually doing. (Although possibly it has a documented expectation to be O(1), can't remember at the moment.)


Collection's startIndex and subscripting are supposed to be O(1), so I'd assume that first is also supposed to be O(1), since it's trivial to implement it by composing the first two (in fact, this is how Swift implements it by default if you don't override it: https://github.com/apple/swift/blob/e08b2194487d883896a377a0...)
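
A sketch of that composition, using a hypothetical name since `first` itself is already defined by the standard library:

    extension Collection {
        // Hypothetical re-implementation to illustrate the point: if
        // isEmpty, startIndex and subscripting are O(1), so is this.
        var firstOrNil: Element? {
            isEmpty ? nil : self[startIndex]
        }
    }

    print([10, 20, 30].firstOrNil ?? -1)   // 10
    print([Int]().firstOrNil ?? -1)        // -1 (empty)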


Or maybe they just opted for it for simplicity's sake? As said above, first is just a magic getter, which is a function underneath anyway.


But why should I, as the user of an interface, care whether something is implemented as a function or whether it's just a reference to a pure value?

For instance, let's imagine in some C++-like language I had two list implementations: one static and one dynamic:

    class StaticList {
        ...
        int count;  // set at initialization time
    };

    class DynamicList {
        ...
        int count() {
            int n = ...;  // compute the count dynamically
            return n;
        }
    };
Here `count` is conceptually the same between these two implementations, but if I want to get that value, I have to access it differently:

    int c1 = staticList.count;
    int c2 = dynamicList.count();
But that difference is essentially an implementation detail which means nothing to the client. The Swift way just lets me express my interface however I decide is most fitting.
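
A minimal Swift sketch of the same idea, with hypothetical types: one conformance stores `count`, the other computes it, and the call site reads identically either way:

    protocol Countable {
        var count: Int { get }
    }

    struct StaticList: Countable {
        let count: Int                   // stored, set at initialization time
    }

    struct DynamicList: Countable {
        var chunks: [[Int]]
        var count: Int {                 // computed on each access
            chunks.reduce(0) { $0 + $1.count }
        }
    }

    func report(_ list: any Countable) {
        print(list.count)                // same syntax for both
    }

    report(StaticList(count: 3))                 // 3
    report(DynamicList(chunks: [[1, 2], [3]]))   // 3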


There is a useful distinction between "properties" and "functions" that appears in some languages: properties exist in memory and so can be addressed. This is particularly important for "systems"/low-level programming, where addresses are often manipulated directly and need to be stable/controlled.

In languages like C, C++ and Rust, properties are things that are definitely in memory, and functions/methods are things that may not be. If you have an interface like `count` that may be dynamic, it should be uniformly expressed as a method/function (that may have a trivial "return count;" implementation).

On the other hand, Swift has to put a non-trivial amount of infrastructure into making sure all the things with property syntax can behave as if they are backed by memory, so that an address/pointer can be generated for them (in the general case, a temporary pointer that becomes invalid quickly).
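
A sketch of what that buys (hypothetical type): a computed property can be passed `inout` or even have a pointer formed to it; Swift materializes a temporary, writes back through the setter, and the pointer is only valid inside the closure:

    struct Box {
        var stored = 0
        var computed: Int {              // no fixed storage of its own
            get { stored }
            set { stored = newValue }
        }
    }

    func increment(_ value: inout Int) { value += 1 }

    var box = Box()
    increment(&box.stored)               // writes directly to memory
    increment(&box.computed)             // get -> temporary -> setter

    // A temporary pointer, valid only for the duration of the call:
    withUnsafeMutablePointer(to: &box.computed) { $0.pointee += 1 }
    print(box.stored)                    // 3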


The reason that performance-oriented languages like C++ and Rust make a distinction there is to make it easier for someone reading the code to understand the performance implications of a piece of code. Accessing a field involves jumping to a statically-known offset, which is a minor and predictable cost, whereas calling a function could have any cost imaginable. It's reasonable for languages that prioritize ergonomics over extreme performance to make the decision to paper over that distinction.


My ideal simple language is Java 1.4 without checked exceptions.


Then you can use Go instead.


Go does not have exceptions.


Sure it does, a primitive version via panic and recover.


That's a poor substitute for exceptions IMO. Also, almost all libraries, including the standard library, use a different approach, returning error values, so if you wanted to use Go with panics for error handling, you'd need to rewrite or wrap at least the standard library. I would stick with Java at this point. Its library is terrible, but usable in the end.


Certainly not one with optionals. I still don't see the need for them. I can check for nullability myself when it's required.

Generics are indeed useful, but they're such a can of worms that they're not worth it IMO.


Optionals are like having a strong type system to me: they used to feel superfluous and unnecessary, but once I got used to them, I'm happy to have my compiler ensure that aspect of my code is correct rather than finding out I made a mistake from intermittent runtime errors.


Yeah. I don’t like strongly typed languages.


Most languages have optionals. Swift is one of the few that has guaranteed non-optionals. Java and C(++) have null pointers but no way to syntactically declare that something is non-null and have the compiler confirm it.
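
A minimal sketch of what that compiler confirmation looks like in Swift:

    var name: String = "Ada"             // non-optional: can never hold nil
    // name = nil                        // error: 'nil' cannot be assigned to type 'String'

    var nickname: String? = nil          // optional: nil is allowed...
    nickname = "Lovelace"
    // print(nickname.count)             // ...but it must be unwrapped before use

    if let nickname = nickname {         // the unwrap the compiler insists on
        print(name, nickname)
    }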


That works only until what you can "understand" and "want" changes.

I really like to learn everything about every tech that I use, bordering on obsession, and after hitting solid roadblocks a few times, or having to fight with a tool because it is too "simple", I started to grok the difference between "simple" and "easy".


> the difference between "simple" and "easy"

Don't know if you were already referring to Rich Hickey's talk on this, but if you weren't, it might appeal to you. Simple Made Easy: https://www.infoq.com/presentations/Simple-Made-Easy

"Okay, the other critical thing about simple, as we've just described it, right, is if something is interleaved or not, that's sort of an objective thing. You can probably go and look and see. I don't see any connections. I don't see anywhere where this twist was something else, so simple is actually an objective notion. That's also very important in deciding the difference between simple and easy."



