
Just tried it on some random GoPro footage and was very impressed! First, by the quality of stabilization: it is at least on par with, if not better than, GoPro's own (which is considered the best e-stabilization on the market). And second, I can now shoot with stabilization disabled and choose the amount of crop in post. I know Sony offers something similar with Catalyst Browse, but it is very inconvenient to use a separate application to render stabilized footage; here I can just use a plugin for DaVinci and have a seamless workflow.


I've seen arguments against ORMs many times, but what about the opposite: you might not need raw SQL, and an ORM might work just fine?


No, I'd rather spend my time doing something useful than looking for replacements for words.


For almost as long as I've been programming, I've found OCaml a fantastic language that just lacked good development tools and better concurrency. I still believe that under the right circumstances it could become a mainstream language.


I like the language, but I'm not sure that it actually targets any mainstream application. If you look at something like golang, it's very geared towards online server work because that's what the language maintainers wrote it for, hence a lot of focus on GC-induced tail latency. Python and its package distribution system are great for glue code that's going to pick from a wide library of modules. C++ is amazing for portability and performance / zero-cost abstractions. Java has its weird cult in enterpriseware and is also an OK glue layer for mobile UI stuff.

Comparatively, OCaml was initially written with symbolic operations in mind, and thus I personally think it's a great language for writing parsers, compilers, modeling systems, etc. The type system makes it easy to avoid certain kinds of common errors in those programs, and the tail latency of GC operations is typically not as important for those applications as overall throughput. I don't think that heavy symbolic computation is a mainstream target, though. To be clear, I think it's absolutely OK for a language not to aim for mainstream use; but if one wishes for mainstream use, then it's important to be clear about the kinds of use cases within the mainstream that will be targeted.
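To illustrate what I mean (a toy sketch of my own, not from any real project): variant types plus exhaustive pattern matching mean the compiler flags an evaluator that forgets a case, which is exactly the class of error that bites in parsers and compilers.

    (* A tiny symbolic expression language. *)
    type expr =
      | Num of float
      | Add of expr * expr
      | Mul of expr * expr

    (* If a constructor is added to expr and this match is not updated,
       the compiler emits an exhaustiveness warning. *)
    let rec eval = function
      | Num n -> n
      | Add (a, b) -> eval a +. eval b
      | Mul (a, b) -> eval a *. eval b

    let () =
      (* (1 + 2) * 3 = 9 *)
      Printf.printf "%g\n" (eval (Mul (Add (Num 1., Num 2.), Num 3.)))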

It has been a while since I played with the language, though, so maybe things have changed since then. Happy to learn more if I'm wrong and OCaml now has a clear mainstream target use case in mind.


Why does OCaml need a 'clear mainstream target use case in mind'? It's a general-purpose language. It can be used for anything that Java, Go, or Python can be used for, and many things that C++ can be used for to boot.

Sure, OCaml's heritage is descended from a theorem-prover helper language, but that doesn't mean it's forever stuck at that use case. Python was originally an educational teaching language; nowadays it's a data science and glue language. None of that was planned out in advance by the Python folks.

In fact, even if you ask the Go team, Go is arguably the most targeted language of the ones you listed in terms of its use case--and they were surprised that a lot of their userbase came from ex-Python devs who wanted something more reliable and efficient!

OCaml with version 5 is just hitting its stride. With the advances in the language, the multicore support, and tooling, it's going to be competitive with Rust for many use cases. It's worth the wait to see where it goes.


I think that to gain adoption, any general-purpose language needs a clear niche where it can get a foothold. If you are choosing a tech stack, then choosing a language that is not already widely adopted is always a risk. The tooling is usually inferior, there are not necessarily high-quality libraries in the domains you care about, and you pay a cost in training new employees in a new tech stack.

Java was an OO, garbage-collected language that supported (sort of...) hot-swapping code. Go was a reasonably performant, garbage-collected language that compiled to static binaries and has great cross-platform support. Some grad student decided to write Spark in Scala, so you had to use it for Spark jobs. Rust is a systems programming language with zero-cost abstractions and no GC that allows you to write mostly memory-safe code.

What is the killer feature of OCaml that makes it worth the investment?


OCaml has many (almost all) of the advantages that you attributed to the other languages, and adds many more besides. What makes them 'killer features' for those languages but not for OCaml?


Nothing really, but they are mainstream languages and OCaml is (currently) not. So if you really need one of those, then there is already a more widely adopted option that gives it to you. It's not that OCaml is bad (I like it a lot); it's that to adopt it you need a really compelling reason to do so.


Not saying it needs to be planned in advance, but that there needs to be a very strong competitive advantage in some use case, not just checking the feature boxes. Continuing on the case of Python users moving to Go, they're probably not doing so for interactive use cases or for access to scientific libraries...

Does checking the concurrency box make OCaml an awesome choice for some use case?

You mention Rust, for example, but that language is sort of built around a zero-cost-abstraction principle, which is very much not the case for OCaml, so I still think it's going to be viewed as risky for highly performance-sensitive work.


Yeah, I mentioned Rust because it's muscling its way into many use cases which don't really need 'zero-cost abstractions' and can definitely afford a GC, e.g. microservices. A lot of work has been (and continues to be) put into OCaml to make it very competitive in these areas.

For truly performance-sensitive work of course people are going to choose C/C++/Ada/Rust.


If I have learned anything, it's that software, not the language, sells the language.

Mainstream languages have killer software (Spring, Symfony, Pandas, whatever) which then prompts adoption.

Languages for mainstream programming are less of an issue than the software.

Haskell, Common Lisp, etc.: plenty of great languages lack that killer software or clear benefits over other mainstream languages.


I remember Python being already popular in 2000 as a scripting and glue language, a Perl replacement for people who recoiled from Perl, or whose coworkers rejected Perl. Then Eric Raymond wrote a glowing article about his first experience with Python[0], which brought it even more attention. Pandas didn't exist until much later.

Same thing for Spring. Spring came about as an attempt to make J2EE more usable, which is to say, it didn't exist until years after Java was already being widely adopted in the industry.

Checking on Symfony since I'd never heard of it, it was first released in 2005, which was after PHP had already become popular enough to become the archetypal "bad language" in the minds of people who had never used it.

I think what you're seeing is that the ecosystem that develops around languages reflects their popularity and the interests of their users. As a readable, low-complexity glue language, Python became popular with scientists and then (unsurprisingly in retrospect) people started using it for data analysis. Java became popular with big iron enterprise programmers, and so it sprouted enterprise frameworks where half your code was in XML. PHP was popular for small web sites, so it spawned a whole menagerie of web programming frameworks. A language without any ecosystem growing around it is probably a language that doesn't get used much.

For a more contemporary example of language adoption, look at Rust. The appetite for the language existed from the get-go based on its design and stated goals. There was a lot of excitement, even from people who said they couldn't adopt it yet because of the lack of ecosystem around it. As the ecosystem developed, more and more people adopted it the moment it was practical, establishing it as a mainstream, growing language prior to the existence of any killer software. In the future maybe there will be a killer Rust framework that brings in a lot of people with no interest in Rust the language, but Rust is thriving despite that killer app not existing yet.

[0] https://www.linuxjournal.com/article/3882


What you said is correct. But then for someone to build the "killer software", they need to be attracted to the language in the first place. The language needs to excite the programmer behind a (future) breakout project.

Sometimes the language is chosen by luck (e.g. company requires it) but many times a programmer will make an explicit choice to use that language.


If this were really true though, then wouldn't F# be bigger than it is?


F# suffers from being a stepchild as far as .NET product management is concerned.

They agreed to put it on the box, but never gave it the same love as VB and C#, or even C++/CLI, and at this point they could rename the Common Language Runtime to the C# Language Runtime, given how little attention they give to anything other than C# on newer workloads.

Had F# been given feature parity with C# and VB in Visual Studio tooling and the core frameworks, the adoption story would be much different.


F# is great and it benefits from the .NET ecosystem, but at the same time the language sometimes feels like it is being held back by not wanting to add new features that C# doesn't support, so its relation to .NET is kind of both a blessing and a curse.

Previously it added a lot of stuff on its own, like async, which was cool but also resulted in compatibility issues when C# later added similar features.

Now the F# developers are very concerned with compatibility, but it basically means that F# can't get new features until C# already has them. It's also limited by the runtime, which is designed around C#.

For example, it doesn't support type classes because (among other reasons) that might end up being incompatible with future C# type-class-like features several years down the line.

It's also hard to learn F# unless you already know some OCaml/Haskell.

OCaml has failed to catch on that much so far, but I think it does have potential, and adding multicore support/effects is pretty promising.

On the other hand, the fragmentation with stuff like Reason/ReScript is pretty dumb.


From a pure pro-OCaml perspective I don't really appreciate ReScript per se. But it truly does seem to have raised awareness of the language among web devs who otherwise don't have a reason to worry about much outside of JS/TS. The timing is also good, because TypeScript has directly shown devs the value of types, and taught them enough about them to understand what OCaml is really offering above TS-style static type checking.

That's my impression from my last two TypeScript jobs, anyway. I don't know what, if anything, will come out of this, but I can't see it being bad.


I am curious. What does ReScript have that Reason did not have?

I know the syntax is now incompatible with OCaml, so I see the broken eggs but I don't see the omelet.


I don't think it's that different at the moment, but they apparently wanted not to be constrained by Reason/OCaml compatibility going forward.

Reason seems to be pretty dead, so now the relationship between OCaml and ReScript is kind of like that between Haskell and PureScript.

This may be good if you only want to run it on the frontend, but it's not as good if you're using OCaml on the backend and want to share code. But I guess there's js_of_ocaml.

I also don't think the existence of ReScript is in itself a bad thing, it's just that the whole confusion around Reason/ReScript may have harmed development of OCaml for a while.


> whole confusion around Reason/ReScript may have harmed development of OCaml for a while

OCaml is an anchor language for many important projects (e.g. Coq) and institutions (INRIA, Jane Street, etc.). The momentum behind OCaml has grown, and I don't think the ReScript/Reason separation affected it _that_ much.

But I am curious to know if the ReScript/Reason issue harmed ReScript. While ReScript gained a lot of technical freedom from the breakaway from OCaml, it lost some people along the way. Was the breakaway worth it in retrospect?


> if the ReScript/Reason issue harmed ReScript.

Personal anecdote: yes, it has.

I was very interested in Reason when it appeared, and it seemed to have immense momentum: exploring arguably better (or more familiar) syntax, tool integrations etc. I know that people ran regular OCaml workflows/projects with it.

And then the whole split happened... why? "We don't want to be constrained by OCaml", while keeping all of OCaml's syntactic idiosyncrasies among other things, doesn't sound like a proper, well, reason.

This is where I stopped being interested (and, as I imagine, many people stopped, too). Because a split in a niche miniature language (which it was at the time) means only one thing: not enough resources to continue with either one.

It doesn't help that the whole split was confusing to everyone. Good description here: https://ersin-akinci.medium.com/confused-about-rescript-resc...


I too lost interest in Reason/ReScript after ReScript became its own thing.

A lot of talent has moved on to other things (or stayed with Reason), with arguably minor gains for ReScript in terms of technical freedom gained.

TypeScript is so dominant in this space that it really didn't make sense to split an already small community.


Oh, I have no idea; I didn't hear about any of these until after the schism or whatever, and haven't bothered to understand the differences and relationships. I use OCaml and hear JS/TS people talk about ReScript sometimes, so I looked into it instead of the others.


Exactly. Functional programming has been around for quite a while; it just recently became the FAD of the day. Just search YouTube for "functional programming in javascript". This will fade away in just a couple of years.

F# is an interesting language and yes, I used it at work a couple of times. However, finding a programmer who is into that is hard, and the HR "partners" will kill you, because people like these can't be treated as cheap replaceable resources. So it was F# prototype -> C# production. Attempts to introduce a new language or tech stack usually failed because the decision makers (managers) were more concerned about "how many people can I hire within a month" than anything else - nobody was fired for picking Java, C# or JavaScript (TypeScript) for the next project and staffing accordingly.

Languages like OCaml, Haskell, Lisp (whatever flavour you pick) or Prolog will never become mainstream. Should they even? One of them is my favourite for general hacking and research projects; I'm not sure I'd like to use it in a corporate job (which I have right now). Small efficient team in a tech start-up? Hell yeah! Mainstream mundane programming? OMG NO! Horses for courses. Hearses are not mainstream vehicles, yet all of us will need a ride in one eventually. Does that mean they should become popular and mainstream? :-D


Functional programming is not just the FAD of the day. This is evident from the strong presence of functional programming in Rust, an imperative language. Functional programming is also the foundational idea behind React. As with most things in life, the extreme ends of the spectrum rarely pan out. Pure functional programming languages are harder to work with, but functional programming has a ton of merit. It's here to stay.

Regarding OCaml, the functional aspect of the language is not hard. I have trained several junior programmers to write code in it (ReasonML). It is not a pure functional language. The biggest challenge for most people is dealing with types.

OCaml's standard library is a huge sore point. It also lacks a lot of proper tooling. The biggest problem with OCaml/ReasonML is that they are unable to rally everyone around a unifying vision to gain traction.


I dare to disagree - I still think that functional programming is the FAD of the day right now. Like object-oriented was the FAD of the day in the 1990s, when many then-procedural languages saw OOP extensions bolted on top (Perl, where you need a bless() to work with an object? :-D ), and some were even like "yeah, everything is an object, so let's wrap every procedure into a class and use the Command pattern instead of closures and build a Kingdom of Nouns where people will name their classes after design patterns and everything will be perfect with a cup of hot Java".

Nowadays C++ has closures. What will become the FAD of the day in 10 years? Will we see embedded Prolog in C# 12 or C++31? Who knows? Like OOP, which has been with us since 1968(?) and hasn't disappeared anywhere, functional programming features will not disappear from mainstream languages. But the "cool new shiny silver bullet" will be something else. Like Simula 67 and Smalltalk-80 paved the way for OOP, Haskell and OCaml (et al.) have been paving the way for functional programming, and Prolog and Datalog have been paving the way for logic programming. They won't become mainstream when logic programming becomes the FAD of the day.

BTW you mentioned you taught a couple of junior programmers an ML-ish language. That's awesome! We all (the programming community) need more legends like you.


Just a heads up: the word "fad" isn't an acronym, so there's no need to capitalize it (and it looks strange to do so) the way both you and the person you're replying to are.


Thanks. English is not my native language.


> fad

By definition, a fad is an intense and widely shared enthusiasm for something, especially one that is short-lived; a craze. I don't see OOP enthusiasm as short-lived.

Many are still living in the Kingdom of Nouns, still follow Uncle Bob's preaching with due diligence, and judge you harshly if you don't make everything a class and don't use lots of design patterns even when they are not needed.


> [...] and some even were like "yeah everything is an object so let's wrap every procedure into a class and use Command Pattern instead of closures and build a Kingdom of Nouns where people will name their classes after design patterns and everything will be perfect with a cup of hot Java").

Without passing judgment on anything else you wrote: what a fitting description, "Kingdom of Nouns". I'll steal that for my next anti-mainstream-OOP-everything-must-be-a-class rant, if you don't mind.


I believe that expression originates from this blog post: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


When you say "fad", do you also mean over-hyped?


In this case, yes.


> OCaml's standard library is a huge sore point.

I think this is a bit overstated. It's roughly the same size as JavaScript's, missing regexes but having some stuff that JS lacks. And in lots of languages with large standard libraries, people tend to use standard-library replacements. Considering that there are also not a lot of people working on OCaml, the core team's time is better spent on stuff like multicore rather than HTTP servers and clients.

The standard library is also open to contributions. For example, in OCaml 4.14, 44 functions were added to the Seq module https://v2.ocaml.org/api/Seq.html. String also gained a lot of utility functions for integer decoding in 4.13 https://v2.ocaml.org/api/String.html.
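To give a flavour of those additions, here is a minimal sketch using a couple of the 4.14 Seq functions (Seq.ints and Seq.take are among the new ones):

    (* Lazily generate the naturals, keep the evens, realize the first five. *)
    let first_five_evens =
      Seq.ints 0
      |> Seq.filter (fun n -> n mod 2 = 0)
      |> Seq.take 5
      |> List.of_seq
    (* first_five_evens = [0; 2; 4; 6; 8] *)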


Nit: from what I remember, and most unintuitively, Reason is the language / alternative OCaml syntax, and ReasonML is its umbrella project.


The alternative Base/Core from Jane Street looks good, though. Of course it splits the community, but the work done on it is very good.


They're great, but Base is new(ish) and not complete, and Core is Linux-only (maybe it works on OS X too?)


The Unix bits were broken out of Core earlier this year; it's now multi-platform. https://discuss.ocaml.org/t/ann-v0-15-release-of-jane-street...


That’s great to hear, I missed that update (still on 0.14).


It's not about functional programming, it's about managing state. And FP is good at managing state, so no, it's not going away, especially with multiple cores and distributed systems. Even databases are learning from the patterns seen in FP.
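A minimal sketch of what that buys you (OCaml, with names I made up): state is an immutable value, and an "update" is just a function from old state to new state, which is what makes it compose well across cores and machines.

    type state = { count : int; label : string }

    (* Returns a fresh value; the old state is untouched, so concurrent
       readers never observe a half-written record. *)
    let incr s = { s with count = s.count + 1 }

    let () =
      let s0 = { count = 0; label = "requests" } in
      let s1 = incr s0 in
      Printf.printf "%s: %d -> %d\n" s1.label s0.count s1.count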


But wait, OCaml is not just a functional language; even the "O" in its name stands for "Objective".


That doesn’t explain why languages like Go, Kotlin or Rust became mainstream, and why languages like OCaml or F# didn’t. This is the real question. Is it because most programmers prefer languages that are mainly imperative instead of functional? Is it because of where those languages originated? Is it because of the runtime, the tools, the documentation and/or the standard library, and not the language itself?


I think it's because influential companies in the tech space pushed them. For instance, Go is backed by Google, and they made a lot of rounds evangelizing Go.

Kotlin is a JetBrains project, and it certainly didn't hurt that Google made Kotlin the preferred language over Java for development on Android, and that Kotlin itself has great Java ecosystem compatibility.

For years, Rust was Mozilla's baby, and they did a lot of good work evangelizing Rust for its use case, and other companies adopted it as well, continuing the cycle.

There is nothing I can think of that is comparable for F#, OCaml, Haskell and many other fine languages.


The corporate backing is a good working hypothesis for C, C++, Java, C#, Go, Kotlin, Swift, Dart, JavaScript, TypeScript and Rust. But it doesn't explain how PHP, Python and Ruby became mainstream. Maybe those ones are just outliers, products of a very special period of time when "dynamic" programming languages were popular (and it's easier for a small team to develop such a language) and when the web was growing very fast (and many devs were bored with the "ceremony" of "static" programming languages like Java).


They were first-round languages. When Python came out, it was largely a C / C++ driven world (Python predates Java by 3 years). Its original use case was scripting C / C++ programs. If I recall correctly, one of the biggest use cases for Python early on was writing test harnesses for C++ codebases in particular. Python's ability to basically be a "wrapper for C / C++ libraries" is still its main strength, though clearly the language has evolved since that time period. Pascal was a big player too back then, and has fallen quite substantially in developer mind-share. (In the 1980s, predating this era, there was a lot of work and energy around Object Pascal and getting it onto every platform they could. It almost worked. This is what Embarcadero[0] is all about via Delphi[1].)

PHP was "easier" Perl. Perl was also ubiquitous at one time because of CGI and the web. PHP was one of the first languages to have the same ease of deployment story as Perl but a much easier syntax to deal with. Again, language space wasn't as crowded then. Certainly both were easier to use than C / C++.

It was easier to gain a foothold in the first round (web 1.0 era if you will) of software and development. There was simply less competition. Java didn't even come out till 1996.

The second era of languages, really starting I think with Go and continuing through to Rust, faced a bigger uphill battle. There simply isn't the mindshare to capture as easily as there was back then.

Admittedly, I'm still not sure to this day why Ruby took off, other than that it offered a great developer experience as a language, from what I understand. It seems people who use Ruby tend to really like it; it has a stickiness there that other languages don't, maybe? I've always been confused by this one, and I admit that's due to my personal bias of not enjoying its syntax at all; clearly there are lots of people who do. I just prefer one true way of doing things.

[0]: https://www.embarcadero.com/

[1]: https://www.embarcadero.com/products/delphi/object-pascal-ha...


I think Ruby took off because it was a better Perl at a time when people were really wanting one. Python didn't really fill that gap because it did not embrace Perl's shell-scripting and quick text-processing aspects, whereas Ruby did, as well as providing a much cleaner and more consistent language. Also, the developer experience really is wonderful.


Python became mainstream thanks to being a saner Perl replacement, and thanks to Zope.

CERN and Fermilab were already using Python as an administration tool and for data analysis during the early 2000s.

PHP became mainstream thanks to being a better Perl for Web applications, and the only option at many ISPs.

Ruby has Rails to thank for its adoption, a product that came out of Basecamp.


PHP, I feel, spread mostly because all the old web providers you could sign up with to run your own site had LAMP stacks available, either installed by default or trivially installable, often with instructions.

With Ruby I'm not sure what caused the hype, unless it was something like Heroku making Rails easy to deploy.


Didn't Python become mainstream until NumPy/pandas? In my mind, data science was Python's killer app, but maybe that's just my revisionist brain. Similarly, I think Ruby had Rails as its vehicle.


The vast majority of startups in the previous decade were built on Python web frameworks:

YouTube, Instagram, Pinterest, Reddit, etc.


Rust also brought something completely novel to the mainstream, so I wouldn't put it next to Go and Kotlin, which are at most different combinations of existing features.


I'm pretty confident we will find a fairly direct correlation between how mainstream the language has become and how much money was poured into its development (i.e. headcount of engineers working on the project and their salary levels).

Now I know correlation does not equal causation. But you have to think it must have some effect when Google is paying its Distinguished Engineers--the likes of Rob Pike and Ken Thompson--plus at least several more engineers, project managers, and others, all to work on Go. Can you imagine what the OCaml ecosystem could do with that kind of money and dedicated engineering talent? Heck, any technology for that matter.


F# is a fantastic language! However, its official documentation is a bit lacking (like C#'s). Also, having developed some reactive applications, I'd say .NET Core needs some improvements before people can have fun developing desktop or web apps with it.


F# can't become big for the same reason C# isn't bigger, and Swift probably won't be bigger either. It's locked into an OS-specific platform and very specific non-free dev tools. (Even if .NET is open now, it still took too long to become available on Linux, and now its reputation is as a Windows-only platform.)


F# isn't bigger because it's a functional language and everyone starts learning programming with a C-based language.


> everyone starts learning programming with a C-based language

Idk about that, especially in the past decade. Python seems to be the most popular language that people pick up on their own as their first one.

As for the intro college courses for programming, they mostly seem to be Python or Java, with occasional C (haven't seen that one myself yet for an intro class), and one instance of OCaml (apparently Cornell teaches its intro to CS in it).



I think it could've been huge with Reason, but they decided to split the language, the development, and the community apart.


It was used to bootstrap Rust. Others clearly also think it's a great language!


And I want an iPad-Mini-sized Android tablet, but for some reason all compact tablets have terrible performance, and because of this you have to use a large phone to watch videos or play games.


I use it and will continue to use it because it most accurately reflects my intentions for my software. If some organization can't comply with it, it's worse for them. They should change, not me.


> They should change, not me.

OK, they should change. Great. They won't, though, so your code won't be usable by them. If that's not a problem, then your code was never really relevant to this discussion. If that is a problem, then shoulds and woulds ain't gonna help.


The purpose of freely distributing code is not to be useful to corporations. It's to foster a spirit of sharing with the community.

If I put something up for free, it's because I want other programmers to be able to freely use it. I don't care if a big corporation with expensive lawyers likes the way the license is worded.


But if your license is stupid, then other programmers cannot use it. Congratulations.

There is really no excuse for not, at some point in your entire programming life, taking the time to go over these things, figure out what they end up meaning in real life, and thereafter knowing which ones actually do represent your wishes and intents. Picking one whose only property is that it's so short it doesn't actually do anything hardly counts.

It doesn't really matter how big and complicated a license is, any more than how big and complicated a compiled executable is, or all the parts in a car. What matters is whether it does the job that needs doing. All else being equal, smaller and more elegant is better as a direction or principle, of course, but the reason we even have writing is to assemble work into a reusable package, so that time-consuming work like writing a book only has to be done once, and then everyone else gets to use the big complicated work many times over without having to re-create it each time.

It's an efficiency and a power amplifier to be able to pack up a bunch of complicated things into a writing and then treat the writing as a single simple thing.

You happily use gcc (or your car, or whatever) many times every day. You did the work one time to figure out that the tool you need is gcc, and after that, all you mentally think about is just "gcc", not the thousands of lines of code or the millions of machine instructions that make it up, every time you use it. If I write a new "The Un-cc" that only has about 8 lines of code and is oh so refreshingly simple to understand, one would hope that you would not use it.

The point of the established and thorough licenses is exactly to do a whole lot of hard work once and let everyone else reuse it countless times.

The point of a stupid new license that pointedly and intentionally does not do any of that hard work is that there is no point at all to it. It's just a stupid idea, and as such, it's probably no great loss that other people cannot use code from an author who chose such a stupid license.


It's not about "some organizations being unable to comply", it's that it is poorly written and thus incompatible with the legal systems of countries. But screw everyone outside of the US I guess


But the article makes a good point: unlicensed means the opposite.

Imagine two people from a corporation having this conversation:

'Can we use this?'

'Yes, it is unlicensed'

'So we cannot, we don't license it'

'No, we can'

'But you said...'

and so on...


In the License section I write "Public domain, see the LICENCE file.", and the word "unlicense" is nowhere used in the text of the license itself.


Then I would not be able to use your code, no matter my intentions and no matter how well-meaning you are.

In my country you cannot dedicate something to the public domain (it happens automatically 70 years after your death).

The Unlicense would most likely allow you or your heirs to come after me and sue me for copyright infringement for many decades.

If you (pretty much) only want US citizens (or naive people) to use your code, that's fine. However, given that you chose this license, it seems like this wasn't your intention...


"You can't dedicate something to the public domain in my country" - that's the whole point of Unlicense. Yes, I can do it, in my country, in your country, in any other country, and even on Mars. And I don't need an approval of any authority to be able to do it. The problem is not that I can't do it, the problem is that countries refuse to acknowledge that fact.


How does the Unlicense most accurately reflect your intentions? The Unlicense has three different components: grants (paragraph 2), a no-warranty clause (paragraph 4) and the dedication to the public domain (paragraphs 1 and 3). Everything but the PD dedication is common to other PD-equivalent licenses, so I believe you have two intentions:

1. You want the PD dedication whenever it works.

The dedication clause is unfortunately most problematic in that, for example, it never works in jurisdictions where copyright laws are recognized but no actual dedication to the public domain is possible. The complexity of CC0 exists solely to make it effectively PD-equivalent even in such cases.

2. You don't like "lawyer speak" and prefer shorter licenses.

Okay, CC0 is unfortunately bulky, and while legally absurd, I can somehow relate to that line of thought. But does that mean the PD dedication clause should exist in the license itself? No! You can easily make a PD-like license by writing your own dedication plus a very permissive and short license like the zero-clause BSD. In this way your intention to dedicate remains explicit (or even stronger) and you can pick legally safer licenses. Indeed this is my preferred method for the dedication [1].
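Concretely, such a file could look roughly like this (a sketch: the dedication wording on top is mine, followed by the standard 0BSD grant and disclaimer):

    This is free and unencumbered software; to the extent possible under
    law, the author dedicates it to the public domain. In jurisdictions
    where such a dedication is not recognized, the following license
    applies:

    Permission to use, copy, modify, and/or distribute this software for
    any purpose with or without fee is hereby granted.

    THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
    WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
    WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
    AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
    DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA
    OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
    TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
    PERFORMANCE OF THIS SOFTWARE.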

Also remember, the actual SQLite "license" [2], from which the Unlicense claims to be inspired, is not a license. It is just a dedication and words of blessing. The actual license, in case the dedication doesn't work, is available for purchase elsewhere [3]. The Unlicense authors are seemingly ignorant of this fact.

[1] See https://github.com/lifthrasiir/rust-strconv/blob/master/LICE... for the example. (It eventually got into the Rust standard library, hence weird triple licensing.)

[2] https://www.sqlite.org/cgi/src/file?name=LICENSE.md&ci=trunk

[3] https://www.sqlite.org/purchase/license


Now I regret a little that I switched from Linux to a MacBook.


To me it looks like all of those banking/SWIFT sanctions will mostly affect regular Russian people and small businesses, who will not be able to work with foreign markets. The Russian government will continue to trade oil and gas like nothing happened, and will find ways to trade other goods through non-Western countries like China. And sanctions on high-tech exports will take many, many months to have an effect. Honestly, it looks like Western countries are trying to deceive people into thinking they are actively helping to resolve this situation, when they actually want to leave everything as it is.


Part of the power of sanctions is to weaken a leader's power over their people. If they lose the mandate to lead, the people may revolt and decapitate the leadership causing the issue.


It is true that that is the logic, but there are many modern examples showing it is flawed (Cuba, Iraq before the second war, Venezuela, etc.). I do wish that policy makers were as open about this as they were in the early years of the Cold War: that their goal is to inflict massive pain on the population and pray for unrest. Instead, nowadays it is framed as a direct punishment of the leaders, or of an abstract notion like the enemy nation or its supposed ideology.


These are just falsehoods as well. America opposes Cuba and Venezuela because of socialism, not because it cares about the people of those countries. The reasons for Iraq should be obvious by now.


I don't know what you think is false. I agree that the US opposed Cuba because of socialism, and that in order to counter/end socialism there, it imposed sanctions, consciously as a means "to decrease monetary and real wages, to bring about hunger, desperation and overthrow of government": https://history.state.gov/historicaldocuments/frus1958-60v06...

I agree that they didn't (and still don't) meaningfully care about the people of those countries. The goal of ending socialism was not to help those people (Reagan had a lot more rhetoric about fighting socialism because he cared about the people, but I think, as in the Kennedy era, this was mere rhetoric). The people are merely instrumental tools.

You don't have to be pro-socialist or anti-US to believe these things, they're quite well documented.


Great.


People fear being jailed, tortured and having their lives ruined by their own government much more than any consequences of economic sanctions.


From my first-hand experience of being under sanctions, they do the opposite.

In 2015 Visa and MasterCard banned all transactions in Crimea. It hit hardest the people who were naturally the least loyal to Putin: the upper middle class, IT professionals, and everyone who relied on receiving international payments. I don't know anyone whose opinion of Putin changed for the worse as a result, but a lot of people, myself included, changed their attitude towards the West for the worse. And even if the sanctions eventually result in the fall of the regime, you can be sure they will be remembered as a hostile action.


Dropping bombs on poor civilians looks bad, but starving them to death with sanctions looks good. It's humane really.


Putin was well aware of all the possible sanctions when he gave the order, so he believes that he is prepared for the consequences.

And no, it doesn't actually work like that. If the West does come up with sanctions that significantly affect the quality of life in Russia, the Russian government will simply use it as fuel for its propaganda that the West is out to get Russia (as a reminder, that's the stated motivation for the invasion!).


It hasn't worked before with Putin, but people still believe that if the pressure is strong enough he will change. He clearly doesn't care, and no one is going to take power from him.


Same reason there are almost no phones with clean Android: manufacturers focus on marketing and follow the easiest path, instead of really caring about users' needs.


Also, the 3090 costs almost as much as the whole MacBook.


The truly amazing thing is that the M1 Max MacBook Pro will have twice the transistor count of a 3090 in its SoC (not counting memory stacks). That's a laptop CPU now.

It also has hexadeca-channel RAM: 16 channels of DDR5. Instead of putting the SoC on GDDR6 and gimping the CPU side with higher latency, they just stacked in DDR5 channels until they had enough. Absolute meme-tier design; Apple just does not give a single fuck about cost.

(Note that DDR5 channels are half the width of DDR4 - you get two channels per stick. But the burst length is longer to compensate (so you get the same amount per burst), and you get higher MT/s. Either way, it's conceptually like octo-channel DDR4, "but better": it's a server-class memory configuration, and then they stacked it all on the package so you could put it in a laptop.)
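Rough back-of-the-envelope, assuming 32-bit LPDDR5 channels at 6400 MT/s (the configuration the M1 Max is reported to use): 16 channels x 32 bits gives a 512-bit bus, and 512 bits / 8 = 64 bytes per transfer, so 64 bytes x 6400 MT/s ≈ 410 GB/s, which lines up with the ~400 GB/s figure Apple quotes.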


We don't know exactly, of course, but I wouldn't be surprised if Apple doesn't even save costs with their own chips vs. buying from Intel/AMD. In any case, they wouldn't be able to reach that compute power in a laptop of that size at all without the new chips.


Hm, as far as I can tell an M1 Max MacBook Pro costs like $3k+ (the config used in the link is like $4k). How do you figure that?


Easy: you're not getting a 3090 at MSRP unless you're lucky.


Best Buy has been doing physical drops on a monthly basis. I got one in the last drop at MSRP, and only waited in line for 30 minutes. It's worth checking out if you are looking for a card.

I will say, though, that if you are in a large city this doesn't work as well; the lines are longer.


They've not done these in my area, but indeed I am in a "big city," or at least the suburbs.


Just wait until miners start buying MacBooks.

