The too good to be true alarm bells in my head are ringing. There's almost no way it delivers on all of the features listed here. Gonna need a lot of proof to back all of this up.
I'd say there's also a big difference in the sort of correctness that static types guarantee vs the correctness that tests give you. Tests, to borrow some phrasing from Donald Rumsfeld, are only really effective against "known unknowns," not "unknown unknowns." Tests are only as thorough as the test writer and the "known unknown" cases they can come up with to test against. But as software developers we've all been in the situation where some case that you didn't think to test for is what winds up causing a bug (an "unknown unknown").
Static type systems, on the other hand, can give you much greater guarantees of correctness (if they're sound). Due to the Curry-Howard isomorphism, we know that programs in sound static type systems correspond to mathematical proofs. That's a much stronger guarantee than what testing alone gives you. The problem of missing a case in your testing goes away if you have a mathematical proof that that case cannot occur. You've taken some of the burden of thoroughness away from the programmer and given it to the compiler instead.
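To make the contrast concrete, here's a minimal sketch in TypeScript (chosen purely for illustration; `NonEmpty` and `first` are hypothetical names, not from any library) of encoding an invariant in a type so the buggy case can't exist, rather than hoping a test covers it:

```typescript
// A list that cannot be empty, by construction: there is always a head.
// (Hypothetical helper type, for illustration only.)
type NonEmpty<T> = { head: T; tail: T[] };

// With plain T[] this function would need a runtime check and a test
// for the empty case; here the type system rules that case out entirely.
function first<T>(list: NonEmpty<T>): T {
  return list.head;
}
```

The "unknown unknown" of an empty input simply has no representation, so no test for it is needed.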
In a perfect programming utopia we would all encode the desired properties of our programs in static types and have no need for testing because we'd have proofs of correctness. The problem of course is that writing those kinds of proofs into types is very time consuming and tedious and isn't realistic for most software development. So we still need tests to cover what is too expensive or complicated to type.
I used to be a big Haskell programmer. While I still love the language, I've really come around to the idea that strictly evaluated functional languages like F# or OCaml are the best for programming in. You get the nice functional features but don't have the straitjacket of laziness forcing you into certain design decisions. It's really nice sometimes to be able to mix in impure, side-effectful code without having to thread it through a monad. The OCaml family feels like a nice blend of good sound functional features with enough escape hatches to be productive in real world programs that might occasionally need arrays or mutable state or IO. Laziness by default also makes things like debugging or reasoning about performance frustratingly difficult. I do miss Haskell's typeclasses in these languages, though. My ideal language is basically OCaml/F# but with Haskell's syntax and typeclasses.
I use F# daily, and to get around this limitation I use hacky custom code generation: an F# script (.fsx) that generates specific types.
Additionally, in cases where you're not specifying the data of the type itself, you can use static type constraints on members. This doesn't give you a default implementation like Haskell does, but you can always provide one as a function.
Btw the monadic threading is still very useful, especially when mixing impure code and mutation (when appropriate). The async and result monads, in F# computation expression form, are particularly useful.
For a good example of async + impure code, check out MailboxProcessor, which is part of the standard F# library. It works similarly to the CSP/goroutine-with-channels and actor models, and makes parallelized concurrency and message passing easy. You can also make pure mailboxes easily by recursively passing data forward, but sometimes you don't want to for memory/GC and allocation reasons.
Oh, for sure I still love monads. I miss Haskell's do syntax in every other language. I think it's a great design pattern that starts popping up constantly once you know where to look. My objection is being forced to use monads by the language's lazy-by-default semantics. You get this problem with Haskell where the IO monad eventually pollutes a huge chunk of your code because you can't safely sequence side effects without it. Sometimes I just wanted an escape hatch that would let me do side effects that I knew to be safe/harmless. Haskell has unsafePerformIO, but it truly is unsafe because you can't guarantee the execution order of your side effects, which makes it useless as an escape hatch for a lot of purposes.
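A tiny TypeScript sketch of the underlying problem (thunks standing in for lazy values; this is an analogy, not Haskell's actual semantics): once effects hide inside delayed computations, they run in the order the values are demanded, not the order you wrote them in.

```typescript
const log: string[] = [];

// Thunks: nothing is logged at definition time, because both the value
// and the side effect are delayed until the thunk is forced.
const a = () => { log.push("A"); return 1; };
const b = () => { log.push("B"); return 2; };

// Forcing b before a runs b's effect first, even though a was defined
// first. Under pervasive laziness you lose control of this ordering,
// which is why Haskell sequences effects through IO instead.
const result = b() + a();
// log is now ["B", "A"]
```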
Laziness is a good default for declarative programming. Nix from NixOS is an example of a recent lazy language and benefits greatly from it.
Even in strict languages, lazy behaviour is often added, for example via iterators and streams. Mixing effects and mutation with such constructs is also problematic.
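A short TypeScript sketch of that problem with generators: side effects inside a lazy stream don't run when you build the stream, only when (and if) it is consumed.

```typescript
const effects: string[] = [];

// A lazy stream: the body does not start executing until a value is pulled.
function* numbers(): Generator<number> {
  effects.push("opened"); // e.g. opening a file or connection
  yield 1;
  effects.push("advanced");
  yield 2;
}

const it = numbers();
// Nothing has happened yet: constructing the iterator performs no effects.
const effectsBeforeConsuming = effects.length; // 0

const firstValue = it.next().value; // only now does "opened" run
```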
I'm not saying it's a replacement for either F# or OCaml, and I've only glanced at it, but F*[1] looks pretty neat. It's higher-level than F# (a lot) and has a syntax more similar to Haskell's. You can even extract programs to F# and OCaml.
My experience with F* is several unsuccessful hours of trying to compile hello world thanks to a complete lack of instructions for configuration and installation.
Revert copyright to the original 28 years and Disney's increasing domination of the media market becomes less of a concern. Of course this probably won't happen because the US and other world governments exist to serve Disney and their shareholders, but it's the easiest way to fix this that doesn't involve a long and complicated antitrust case that the government might lose in the courts.
If you want to be even more daring and spend a little bit of public money, you could have a publicly financed streaming service with most major public domain works. You could make it available through the Library of Congress.
Not that I necessarily disagree with the point but
"the US and other world governments exist to serve Disney and their shareholders"
Is absurdly hyperbolic language, and makes me want to listen to your actual argument less. It sounds like a college freshman who's just learned how it all "really works".
How else did we get our current extreme copyright lengths? Were regular people advocating for this? Or was it for wealthy corporations and their shareholders? Did anyone responsible for the Great Recession face any legal consequences for destroying our economy and putting people out of their homes? Did people go to jail?
Call it hyperbole if you want, but it's the reality we're living in.
Like a lot of things in politics, it is a question of enthusiasm and motivation. There might not be a single person in this country for whom reducing copyright protection is their most important issue (and if there is, I think that person has crazy priorities), but strengthening copyright is probably the single most important issue for Disney and other long-established media companies. Having a large group of people who are mildly in favor of something is much less likely to yield political change than a small group of people who feel incredibly strongly about that issue. This is the whole reason behind niche lobbying groups ranging from the NRA to the EFF, and why those groups are able to have such a large impact on politics relative to their size.
There has been one copyright extension that can reasonably be blamed on Disney lobbying.
There was one other that also affected Disney's copyright terms, but I can't find any evidence that Disney lobbied for it. Even if they had, it wouldn't have made a difference: that one had near-universal support anyway, because it was part of a major rewrite of US copyright law that did most of the heavy lifting to make it feasible for the US to join the Berne Convention.
May I suggest Kanopy, an excellent selection of streaming media free with a library card in the US?
Also, what about separating the delivery network providers from the content providers? Net neutrality would balance a lot of issues and reinvigorate competition and market forces.
In the context of constant growth encoded in the corporate DNA, corporate leadership has no other choice but to make existing services more expensive and lobby to monetize things that were free (copyright extension).
The only reasonable way to fight against this is to write laws protecting society from this continuous encroachment.
Your library pays for that, it's not free for any library card, just libraries that pay for the resource. Hoopla is a similar service (with a worse catalog) that your library also may or may not pay for.
This is the central concern. No matter how many platforms there are, there will be, by definition, no competition. The monopoly isn't on the platform, it's on the program. No one wants to subscribe to HBO or Amazon; they want Game of Thrones or Man in the High Castle.
I don't see the public interest angle here. This is merely entertainment. If even entertainment is something the government should be butting into because of unspecified "public interest" considerations, then what is outside the government's role?
Goodfellas, Dances with Wolves, Edward Scissorhands, Total Recall, Ghost, the Hunt for Red October, Back to the Future III, Pretty Woman, and the Godfather Part III were all released in 1990, 29 years ago. These were all created by a group of individuals, and studios spent millions of dollars paying those people to make those movies. The government had nothing to do with creating any of those movies. Why should the government be able to take those movies and release them for free?
> The government had nothing to do with creating any of those movies. Why should the government be able to take those movies and release them for free?
The government wouldn't be "releasing" any of these movies; it'd instead be telling the movies' creators: "You chose to make these movies available to the public in a form that makes them vulnerable to being copied. 'To promote the progress of science and useful arts' [0], we the people are willing to intervene on your behalf — but only for a limited time — to stop others from doing such copying. As to those particular movies, your time is now up; if you want us to intervene again on your behalf, it'll have to be for something new you've created, not for the same old stuff."
[EDIT:] That said, of course, the specific duration that copyrights should have is certainly a valid subject of discussion and debate.
By this logic there should be no copyright protection in the first place for entertainment. Why should the government use law enforcement resources to track down people and take away their freedoms because they copied a movie?
Only because the government is already involved, protecting their interests, is there even anything to "take" in the first place.
The government shouldn’t use law enforcement resources to enforce copyright. That doesn’t mean you shouldn’t be able to file a civil suit for infringement. That doesn’t cost the government anything other than the general cost of having courts.
I think the separation between public and private interest may be more blurred than you think. In an indirect way, studios were able to spend money paying people to make those movies because the government provides a safe geographical space for that to happen. There's a reason you typically don't see many movies (certainly not entertainment movies) produced in countries ridden by war, for example. So a strong armed forces, internal police, a predictable legal system, etc., all of that is typically provided by a government and allows business, including movie studios, to run.
In a more direct way: I remember watching VHS movies as a kid in the 80s, and there was an FBI piracy warning near the start of every tape, so the studios do leverage the government to protect their business (and I don't have a problem with that in principle, though I disagree a bit with the extent of the protections).
There seems to be an assumption in this viewpoint that copyright is the state of nature for creative works, and removing copyright protections violates this natural order. This is the opposite of reality, though — copyright is an artificial restriction on expression and dissemination of knowledge, which would otherwise be free and unrestricted.
In other words, if you want to preserve or strengthen copyright, you're arguing for the government to butt in, not against it.
The government actually heavily subsidizes the entertainment industry through tax breaks. In Canada, a huge portion of the economy of Vancouver is in entertainment, which basically exists entirely on the back of tax incentives provided by the government of British Columbia in order to attract outsourcing work from Hollywood. If those tax breaks ended, then the industry would likely vanish overnight to offshoring. Similar situations apply on a lesser scale throughout the US, for example in Louisiana [1].
Local governments give tax breaks to entice movies to be filmed in certain places instead of in other places. Presumably they get their money's worth in terms of local economic activity and tourism. That doesn't give them any claim to the resulting movies. (Governments also use tax breaks to entice major employers like Amazon to locate in one jurisdiction versus the other. Nobody would equate those tax breaks with the government somehow helping Amazon run its service.)
> Presumably they get their money's worth in terms of local economic activity and tourism.
No, they don't. The fact that Vancouver is Hollywood North is industry inside baseball. The public doesn't know or care that the work is all outsourced (which is in fact how Hollywood wants it). The subsidies exist to bring jobs to the region--especially to keep artists employed, which is seen as socially desirable.
> Nobody would equate those tax breaks with the government somehow helping Amazon run its service.
But they do. That's why the entire controversy in NYC existed regarding the tax breaks for "Amazon HQ2", for instance. People were not happy with the government effectively subsidizing such a rich company.
I don't necessarily endorse the grandparent post's proposal--it seems extreme--though I do believe copyright terms are presently too long. However, I do think that the fact that the public subsidizes the entertainment industry is a valid argument in favor of concessions of some kind, for the simple reason that the public deserves its money's worth.
It's worth pointing out that Virginia offered among the most miserly tax breaks in the Amazon HQ2 "competition" (about $500 million when most cities were offering $2-4 billion), and yet Arlington was still one of the locations chosen. The HQ2 "competition" was almost certainly a ruse to get the target jurisdictions, chosen before it ever started, to shell out as much cash as possible, so it's not surprising that there was a massive controversy over the tax breaks.
Almost all creative works make the majority of their money in the first few years from their release, if not the first year itself. Movies, TV, etc. will still be extremely profitable businesses under a shorter copyright. Shorter copyrights also serve to help encourage creators and artists to continue to produce new works for the public to enjoy.
To me Dart is pretty disappointing. Modern languages like Swift and Rust have shown the power of incorporating functional features like sum types and pattern matching. Dart just feels like the same sort of OO language we've been getting since Java became popular.
Edit: Also the continued existence of null in new programming languages is a baffling choice to me.
I don't blame you for being underwhelmed. The original goal was an easy to learn language. But since Dart 2.0 was completed, they are working on adding many other language features, including NNBD (Non-null by default).
It’s just hard to be productive in Dart. The Flutter IDE and simulators were so resource intensive, they literally locked up my laptop the last time I tried using them.
Dart is objectively a terrible language to be productive in for app development. The abstractions provided by the language are clunky to use for window elements on a device screen. Don’t get me started on Material Design, either.
Swift, Objective-C/C++, or even Java is way easier to start building apps with, in comparison. The tooling is mediocre for Dart as well.
> functional features like sum types and pattern matching
"Functional features" means different things to different people. Dart (like most modern languages) has a lot of the core functional features: first-class functions, closures, lambdas, higher-order functions. Our built-in collection libraries are heavily oriented around functional-style transformations. You don't need an external library to map() and filter() your lists to your heart's content.
At the type system level, we also have function types, generic functions, and even first-class generic functions, which is a really unusual, powerful feature.
Sum types are a slightly different beast. Sum types are basically a functional language's answer to subtyping and runtime polymorphism. But object-oriented languages already have full subtyping and polymorphism using classes. There's a sort of zen koan here where algebraic datatypes are a poor man's subclasses and subclasses are a poor man's algebraic datatypes.
The way most multi-paradigm languages like Scala and Kotlin handle this is that sum types are just syntactic sugar for defining a little class hierarchy. Likewise, pattern-matching becomes syntactic sugar for instanceof checks and field access. I like that sugar and hope we can add something similar to Dart, but I don't find its omission to be a profound oversight. It makes some kinds of code nicer, but doesn't significantly affect the expressiveness or capability of the language.
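The desugaring described above can be sketched in TypeScript, where both styles sit side by side (Shape, Circle, etc. are made-up names for illustration):

```typescript
// Sum type as a discriminated union, consumed by match-style dispatch
// on the tag:
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius * s.radius;
    case "square": return s.side * s.side;
  }
}

// The equivalent little class hierarchy: the "match" becomes virtual
// dispatch, with each variant carrying its own behavior.
abstract class ShapeC {
  abstract area(): number;
}
class Circle extends ShapeC {
  constructor(public radius: number) { super(); }
  area(): number { return Math.PI * this.radius * this.radius; }
}
class Square extends ShapeC {
  constructor(public side: number) { super(); }
  area(): number { return this.side * this.side; }
}
```

Both compute the same thing; they differ in where new variants vs new operations can be added without touching existing code.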
> Dart just feels like the same sort of OO language we've been getting since Java became popular.
Yeah. The original designers of the language designed something very conservative. I think they wanted to make a VM with certain features (single dispatch, static class structure, no static initialization, etc.), and designed the safest language they could come up with to let them do that.
There is a lot of benefit to familiarity. I like classes and C-family syntax, and we see very clearly that Dart is really easy for people to learn and become productive in. We've done user studies where participants have been able to write correct Dart code without knowing what language they were using. It's hard to overstate the value of that.
But there is also value in providing the modern tools people want in order to write clean, beautiful, correct, maintainable code. Dart has some catching up to do there. We're making a lot of progress. With Dart 2.0, we replaced the old unsound optional type system with a real, modern, expressive, sound static type system. It was a ton of work to do that while dealing with millions of lines of existing code.
We didn't get all the type system features we wanted, but we have a foundation we can build on now. The optional type system had some nice properties, but was effectively a dead end. When your types are optional, you can't hang any language semantics off them. That takes lots of features off the table: implicit conversions, extension methods, etc.
> Also the continued existence of null in new programming languages is a baffling choice to me.
I have always believed [0] that not having non-nullable types was a mistake in Dart 1.0. We are fixing it now:
There's a lot of work to do, but I'm really excited with the design. Unlike many other languages, we have something that becomes fully sound with respect to null errors. This means that once a program is fully migrated, a compiler will be able to take advantage of non-nullable types for performance optimizations. It will be quite a while before we get to the point where we can do this, but it's cool that that's on the table.
> It makes some kinds of code nicer, but doesn't significantly affect the expressiveness or capability of the language.
Sum types and exhaustive pattern matching aren't about expressiveness; they're tools for aiding code comprehension. They increase the locality of code that has no business being distributed across completely different classes, and they decrease the cost of making changes by heavily reducing the amount of test code that needs to be written to make sure a closed set of options is handled appropriately. In OOP languages you can get this by using interfaces, but to make use of it you'd end up with huge classes carrying methods for every usage of the closed set.
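TypeScript (used here purely for illustration) happens to show the exhaustiveness half of this well: assigning the matched value to `never` in the default branch turns any unhandled member of a closed set into a compile error, instead of a missing-case bug you'd need a test to catch.

```typescript
type Status = "loading" | "ready" | "error";

function label(s: Status): string {
  switch (s) {
    case "loading": return "Loading";
    case "ready": return "Done";
    case "error": return "Failed";
    default: {
      // If someone adds a fourth Status member and forgets a case above,
      // `s` is no longer narrowed to `never` here and compilation fails,
      // so the gap is caught without writing a test for it.
      const unhandled: never = s;
      return unhandled;
    }
  }
}
```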
I would never willingly pick up another language that doesn't provide me with them after experiencing the productivity gains. I really like the direction Dart is moving towards and the steps taken demonstrate there's a team behind it that cares about correctness and productivity over being just a familiar Java-like, but this is the one hard blocker for me.
> they're tools for aiding code comprehension by increasing the locality of code that has no business being distributed into completely different classes
That's true for some kinds of code but not others. This is the classic Expression Problem [0]. For some things, it makes sense to keep all of the code for a single operation together. For others, it makes more sense to keep all of the code for a single datatype together. ML-style languages optimize for the former, and object-oriented languages optimize for the latter.
In practice, for the kinds of code Dart is designed for, the latter is a better fit most of the time. There's a reason OO and UI have been married together for decades.
Ideally, a language provides both styles so you can choose the one that fits your problem best. You see that now with languages like Scala. I hope we get there with Dart too.
I don't think it's fair to say that subclassing and method dispatch is objectively wrong just because it's a bad fit for some kinds of code. (Though, naturally, if it's a bad fit for the kind of code you need to write, then an OO language might be an objectively bad choice for you.) Class-based method dispatch is annoying for some things (God knows I've written enough Visitor pattern implementations over object-oriented AST class hierarchies), but it's really beautiful for others.
Being able to define a new widget class that bundles its rendering and interaction behavior together and can seamlessly extend a UI framework is something so natural that we take it for granted, but is very difficult to express in a language like SML. In fact, in order to do it, you'll probably end up doing a "design pattern" that reimplements something like v-tables at the application level.
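A minimal sketch of that point in TypeScript (Widget, Counter, and renderAll are hypothetical names, not any real framework's API): the framework dispatches through the base class and never needs to know which subclasses exist.

```typescript
// The framework side: knows only the abstract base class.
abstract class Widget {
  abstract render(): string;
}

function renderAll(widgets: Widget[]): string[] {
  return widgets.map((w) => w.render());
}

// The application side: a new widget bundles its state and behavior and
// plugs into the framework with no changes to framework code. This open
// extension is what needs hand-rolled v-tables in SML-style code.
class Counter extends Widget {
  private count = 0;
  click(): void { this.count++; }
  render(): string { return `count: ${this.count}`; }
}
```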
Not sure if I gave off the wrong impression, but I am not a proponent of having only sum types and discarding hierarchies; as you well mentioned each is well-suited to specific usecases.
I do however disagree on subclassing being the best fit for UI code. "The Elm architecture" as well as the model presented by React functional components with hooks offer sets of tradeoffs that I at least have found are better for most of the development I find myself doing when writing (and particularly when maintaining) regular business frontends.
> There's a reason OO and UI have been married together for decades.
People repeat this a lot, that OO goes with UI, but I don't think it's actually true, and I think e.g. React Hooks and immediate mode GUI in general demonstrate that it's not true. OO has been coupled with UI out of inertia, not because OO has unique strengths when applied to UI.
Sum types are a primitive language feature, akin to product types, which absolutely no one rejects the value of. The sum/product analogy to arithmetic is compelling to me: they are the building blocks of complex types and deserve recognition. I think that "zen koan" you mentioned earlier is being very generous to subclassing. You want ADTs most of the time!
> People repeat this a lot, that OO goes with UI, but I don't think it's actually true
It is evidently true. Thousands of successful applications and billions of lines of UI code have been written in object oriented languages. It does work, and it can't be that bad if that's continuing to happen even after the emergence of other alternatives.
Whether there are better ways is a good question, but I think it's pretty clear that you can ship good apps using OOP for your UI.
> React Hooks and immediate mode GUI in general demonstrate that it's not true.
I'm far enough into my career now — I've been doing UI programming of one form or another since the 90s — to have seen that pendulum swing several times. If there is a silver bullet, we haven't found it. It's probably not immediate mode GUIs because if it was, I wouldn't have seen game teams tear them out to replace them with something more retained several times in the 2000s.
What I think actually happens is that we forget the problems lurking in the solution we are not currently using. The grass over there gets greener and greener until we hop the fence and the cycle starts over. Incremental progress does happen. (I am not keen to revisit MFC any time soon.) But if a given concept (1) has been around a long time (2) has not already supplanted the alternatives, it's pretty unlikely that it is now an amazing solution today. The only time when that isn't true is when the surrounding technology context has changed since then.
For example, neural nets weren't a good solution for AI problems in the 80s because compute was too expensive and we didn't have a lot of data. Now that CPUs are cheap and everyone puts their entire life on the Internet, machine learning is here.
I haven't seen anything around UIs that to me looks like a significantly changed context, so I think we're still orbiting around retained-mode and immediate-mode as both having their own trade-offs and neither being a slam dunk.
> I think that "zen koan" you mentioned earlier is being very generous to subclassing. You want ADTs most of the time!
I really don't think that's true. Just look out there in the world. More code is written in languages doing subclassing every day than in languages with sum types. Despite the fact that sum types have been around since the 70s. You have to have a very uncharitable opinion of all of your fellow programmers to believe they've all been getting this wrong for decades. Heck, the software you are using right now to read this comment is sitting on a stack of several layers of subclass-based architectures! You've got JS running on top of the DOM inside a browser written in C++.
Sum types are really nice. But open-ended subclassing is too.
I wanna hear more open discussion about the immediate- vs retained-mode approaches.
For one, imgui is gaining traction again, especially among game developers, simply because it offers a lot in ease of compilation, understandability, compactness of code, etc. You can really build complex tools out of it.
But there is one nasty elephant in the room: state. imgui's approach is to hide it somehow. It used to be keyed off your __LINE__ (or __COUNTER__, or stack.line in some languages), or maybe part of your label points to your data, and if you've had the bad luck of reusing the same label name, then there is yet another workaround, something special hidden in there.
All in all, it seems like it's missing a language feature, and we are suddenly grasping at all kinds of tweaks to achieve that.
That, and layout. Layout is damn hard in immediate mode. It works by magic, and then your app might crash and lock up. No, I'm not kidding...
Then again, I'm but a simple user of UI toolkits and have never fully written one.
First, I just want to say that I use Dart pretty regularly now and enjoy the language. I don't want to come across as overly negative; Dart is a solid language that's made web programming a lot more fun for me. So thank you, and everyone else on the Dart team for the hard work!
I just want to address these lines:
> Thousands of successful applications and a billions of lines of UI code have been written in object oriented languages.
> More code is written in languages doing subclassing every day than in languages with sum types. Despite the fact that sum types have been around since the 70s. You have to have a very uncharitable opinion of all of your fellow programmers to believe they've all been getting this wrong for decades.
I agree that there's a huge amount of code out there using subclassing and not sum types. I'm not disputing the utility of subclassing; I just think the analogy to arithmetic is compelling, in that a closed sum type is a more primitive notion than open-ended subclassing. It's easier to describe what a sum type is than what a subclass is; sum types have a smaller impact on a type system than subclassing. By pretty much any metric you can think of, sum types are just simpler and more widely applicable. Any time you are describing a data structure, an ADT is immediately useful; subtyping may or may not be useful and is always more complicated. It's very difficult for me to understand how anyone could say subtyping is on the same level as a basic operation like addition. Subclassing may or may not be nice, but sum types are a primitive in a way that subclassing simply cannot be.
I don't know how to break it down more than this: we already have multiplication of types, and everyone accepts it as a primitive. Well, you can also do addition of types! Multiplication, addition, a neat little pair, just like algebra class[0]. Subclassing is way more complicated than this. That's it; that's a bulletproof argument as far as I'm concerned.
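The counting argument can be made concrete by enumerating inhabitants, sketched here in TypeScript with boolean (2 values) as the building block:

```typescript
const bools: boolean[] = [true, false];

// Product type { a: boolean; b: boolean }: one inhabitant per
// combination, so 2 * 2 = 4 values. That's multiplication.
const productValues = bools.flatMap((a) => bools.map((b) => ({ a, b })));

// Sum type of two tagged booleans ({tag:"l"} | {tag:"r"}): one inhabitant
// per variant's values, so 2 + 2 = 4 values. That's addition.
const sumValues = [
  ...bools.map((v) => ({ tag: "l" as const, v })),
  ...bools.map((v) => ({ tag: "r" as const, v })),
];
// productValues.length === 4 and sumValues.length === 4
```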
I suppose I do have an uncharitable opinion of mainstream programming languages, because I do think they've been getting this wrong for decades. It's nothing personal, it's just that industry has other concerns besides how clean their languages are. My browser being written in C++ is not an argument in favor of subclassing, though. You can build anything out of toothpicks if you're paid enough.
> I just think the analogy to arithmetic is compelling, in that a closed sum type is a more primitive notion than open-ended subclassing.
I agree, sum types have a real beautiful elegance. But I often wonder if that's some sort of "appeal to mathematical aesthetics" fallacy. When I see, for example, painters deciding what brushes to use, I don't see them choosing brushes whose diameter follows the Fibonacci sequence or something.
Simplicity is a virtue because it lowers the cognitive load of a language. I don't know if mapping something to arithmetic tells us something actually profound about the productivity of a language feature, even if it gives me a little shiver of delight when I think about it.
> Any time you are describing a data structure, an ADT is immediately useful
For what it's worth, I often run into problems where I think I can map something to a nice set of ADTs but then it ends up still having ugly corners. As elegant as the language feels, when I use them in practice my code is still awkward sometimes.
> It's very difficult for me to understand how anyone could possibly say subtyping is on the same level as a basic operation like addition; one of them is clearly a more basic idea.
Subtyping is set theory, and sets are obviously more fundamental than arithmetic! :D
> we already have multiplication of types, and everyone accepts this as a primitive.
Well, actually, lots of languages don't have tuples and records/structs aren't simple product types.
> I suppose I do have an uncharitable opinion of mainstream programming languages
I wasn't talking about languages; I was talking about people. There are languages out there with all of the features you describe. Yet millions of people are choosing other languages. You must have an uncharitable view of those people if you presume that all of them are making a choice that goes against their own self-interest to be happy, productive programmers.
> Subtyping is set theory, and sets are obviously more fundamental than arithmetic! :D
In case this wasn't clear, the "arithmetic" in question is being performed on sets (or types). For example, for sum types, the number of inhabitants of the type is the sum of the inhabitants of the components. All three of addition, multiplication and subtyping are operations on sets (types). So this was a strange thing to say.
> Simplicity is a virtue because it lowers the cognitive load of a language. I don't know if mapping something to arithmetic tells us something actually profound about the productivity of a language feature, even if it gives me a little shiver of delight when I think about it.
My real point is this: if you are building a language, it is very strange to add multiplication (why wouldn't structs count as multiplication? I'd say they do), then skip addition and instead add something much more complicated than addition. It just doesn't make any sense. It's not about the "productivity" of the feature (how would we even measure that?); it's that the choice itself doesn't make sense. You add multiplication, you add addition; that's really all there is to it. I guess I'm just repeating myself now and you won't find it convincing, but to me it's like trying to defend Roman numerals after being shown the Arabic system.
> You must have an uncharitable view of those people if you presume that all of them are making a choice that goes against their own self-interest to be happy productive programmers.
I don't think so at all. It's not really a fair choice; people choose languages in order to build things, and it's easier to build things in languages that other people are using. The full range of options is not obvious to every programmer (does the average webdev even know about ML?), and most of the time there are more important concerns than whether your language has a nice theory behind it.
One nice thing about Dart I've found is that compile times are very fast (faster than Go), and when editing in VSCode the code intelligence is instant and, in my experience, 100% accurate.
Compare that to Rust, where compilation speed is almost as bad as C++, RLS is slower than e.g. Qt Creator's clang lints, and the auto-completions are so inaccurate that they are almost worse than nothing.
Buuut... I wanted to make a deep copy of an object (a map of maps) in Dart, and one of the suggestions on Stack Overflow was to serialise it to JSON and then deserialise it. Eek. In C++ you just do `auto a = b;`. In Rust, `let a = b.clone();`. How do they leave out such basic functionality?
Sure, but does it do what you want? :) If that map contains pointers or other types with "interesting" assignment semantics, then your idea of a deep copy and the type author's idea may not be the same. Cloning is a surprisingly hard problem.
The two languages you're comparing to don't have GCs and prefer value semantics. But in most GC languages everything is by reference, and "copying the bits" isn't as meaningful an operation.
I looked up how to deep clone maps in some other GC languages:
Python: Import a separate "copy" module. May have to implement some custom methods if you use user-defined objects in the map. The module isn't thread-safe. A comment recommends converting to JSON and back.
As someone who uses Dart every day: it's quite a nice language. All I'm missing is case/data classes, non-nullable types, and pattern matching, but they're evolving the language and all of those will eventually come. For now, I'll use built_value and built_collections ;)
> your idea of a deep copy and that author of that type's idea may not be the same
Definitely true that they may have done something weird, but I think the idea of a deep copy is pretty easy to define: It should not be possible to modify the original object using only the deep copy. (Note that this doesn't preclude copy-on-write, etc.)
I think it is quite weird that so few languages (even GC'd ones) provide a proper solution for this. It's clearly a thing people want to do.
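Under that definition, a deep copy of a two-level map really is only a few lines; here's a sketch in Kotlin (the function name and shape are mine, and it only copies as deep as it's written to, so values with their own interior mutability would need further recursion):

```kotlin
// Sketch: deep-copy a two-level map of plain values, so that
// mutating the copy can never be observed through the original.
fun deepCopy(src: Map<String, Map<String, Int>>): MutableMap<String, MutableMap<String, Int>> {
    val dst = mutableMapOf<String, MutableMap<String, Int>>()
    for ((k, inner) in src) {
        // Copy the inner map itself, not just the reference to it.
        dst[k] = inner.toMutableMap()
    }
    return dst
}

fun main() {
    val original = mutableMapOf("a" to mutableMapOf("x" to 1))
    val copy = deepCopy(original)
    copy["a"]!!["x"] = 99
    println(original["a"]!!["x"]) // 1: the original is untouched
}
```

The catch, as noted above, is that this is only correct because the leaf values (Int) are immutable; a general-purpose `deepCopy` has to make a judgment call at every level, which is presumably why so few languages ship one.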
You would still have null, but only when asked for and checked explicitly.
In Kotlin, for example, you mark something with a question mark to say that it can be null, which forces you to check for null before using it:
val entityOrPossiblyNull: Entity? = Entity.fetch(id)
if (entityOrPossiblyNull == null) {
    doSomething()
} else {
    // The compiler knows that the variable is not null in this branch,
    // so this assignment is OK.
    val entityForSure: Entity = entityOrPossiblyNull
    doSomethingWithEntity(entityForSure)
}
The real problem isn't null itself, it's that in most languages that have it, null inhabits all types (or at least all reference types).
Different languages solve this problem in different ways. Many languages get rid of null entirely, and use option types in its place. Other languages, like Kotlin, fix it in the type system, by differentiating between e.g. String and nullable String (spelled String? in Kotlin).
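To make the contrast concrete, here's a minimal option type sketched in Kotlin (a sealed class of my own, not a stdlib type) next to the built-in nullable spelling:

```kotlin
// A minimal option type, sketched with a sealed class. This is
// illustrative only; it is not part of Kotlin's standard library.
sealed class Option<out T>
data class Some<T>(val value: T) : Option<T>()
object None : Option<Nothing>()

// Hypothetical lookup used for the demo.
fun findUser(id: Int): Option<String> =
    if (id == 1) Some("alice") else None

fun main() {
    // With an option type, the "no value" case is an ordinary branch,
    // and the compiler checks the `when` is exhaustive.
    val greeting = when (val user = findUser(1)) {
        is Some -> "hello, ${user.value}"
        None -> "nobody home"
    }
    println(greeting) // hello, alice

    // With built-in nullable types, the same idea is spelled with ? and ?:
    val id = 2
    val maybeName: String? = if (id == 1) "alice" else null
    println(maybeName ?: "nobody home") // nobody home
}
```

Either way, the key property is the same: the "absent" case is visible in the type, instead of null silently inhabiting every reference type.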
Very skeptical that this is due to Amazon's generosity and not just because they can tap into an underpaid and underemployed workforce. They don't seem to be giving these people access to the jobs that actually make money at Amazon. I'm very wary of feel good stories about employing people with disabilities in low paying jobs with lots of repetitive manual labor. Often it's a way to get cheaper and more easily exploited labor that you can get your PR team to spin as charity.
That's good though isn't it? The more employers start seeking competitive advantage in hiring disabled people, the narrower the pay and employment gap will become.
Not necessarily. Prison labour, sweatshops, and undocumented workers are all types of "competitive advantages" as well.
A company that has a history of treating folks well? Yeah, this might be their advantage. But Amazon has stories of employees not being able to use the toilet while on the clock. I'm just hoping they at least pay folks the same.
It's more likely that they use it to depress the wages of abled workers. This sort of unskilled labor isn't exactly scarce enough to cause meaningful competitive pressure like you describe.
The US is the wealthiest country in the history of the world. If we want to make providing disabled people with fulfilling jobs that pay a good wage a priority, we are entirely capable of accomplishing that.
This statement is so full of naivety. The type of stuff I might have thought when I was 16 and didn't know how a country becomes (and stays) wealthy in the first place.
TL;DR: linked lists' theoretical performance advantages are often negated by years of hardware optimization for dealing with arrays.
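The usual way to see this for yourself is a traversal micro-benchmark. Here's a sketch in Kotlin on the JVM, summing the same numbers stored contiguously (ArrayList) vs. node by node (LinkedList); timings depend heavily on the machine and JIT warmup, so no particular numbers are promised, only that the contiguous layout tends to win despite the identical O(n) cost.

```kotlin
import java.util.LinkedList
import kotlin.system.measureNanoTime

// Sum any Iterable the same way, so the only difference we time
// is the memory layout being traversed.
fun sum(xs: Iterable<Int>): Long {
    var s = 0L
    for (x in xs) s += x
    return s
}

fun main() {
    val n = 1_000_000
    val arrayBacked = ArrayList<Int>(n).apply { for (i in 0 until n) add(i) }
    val nodeBacked = LinkedList<Int>().apply { addAll(arrayBacked) }

    var arraySum = 0L
    var listSum = 0L
    val tArray = measureNanoTime { arraySum = sum(arrayBacked) }
    val tList = measureNanoTime { listSum = sum(nodeBacked) }

    check(arraySum == listSum) // same answer either way
    println("ArrayList: ${tArray / 1_000_000} ms, LinkedList: ${tList / 1_000_000} ms")
}
```

For a serious measurement you'd want a harness like JMH rather than a single timed pass, but even this crude version usually shows the cache-friendliness gap.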