Line drawings work because they represent the centers of symmetry for surfaces and volumes. They trigger the same center-neurons that the original shape would.
A hand can appear on your retina in different sizes and orientations. With distance, its size grows and shrinks.
But the center of symmetry will stay the same.
This goes further. There's also a center of symmetry between edges, and higher-level features as well. Our brain has no issue detecting these.
So if I'm understanding you correctly, "late adopters" including people who don't have money for mining hardware/coins or don't even own a computer, just a phone - deserve to be poor because not buying in to crypto is supporting "deception and violence"? What's the logic here?
The logic of Bitcoin maximalists who already revel in dreams that one day, they will relocate to remote islands or citadels while watching the world burn down.
Any human with a moral compass would shudder at the thought of a collapse of Western democracy, but not them: they giggle in anticipation of becoming the new ruling class or at least an exclusive society of mega-rich while those who doubted the Bitcoin religion will have to prostitute their daughters to the nouveau riche.
"Have fun staying poor" is their battle cry. In their twisted libertarian world view, people forced by hunger to toil day in, day out is not violence, but the state taxing incomes is.
Examining this case from a "similarity to employee" perspective neglects a crucial factor: the expectations between the parties at the time of contract.
It's patently obvious no Uber driver expected to be treated as an employee, as no employment contract was signed, and the Uber model is known and clear.
The court merely implements "social justice", which is neither social nor justice, destroying basic trust in society. Progressivism is cancer.
Labour law (in France at least, I do not know about the US) is based on - for the lack of a better expression - reals over feelz: what matters is the actual relationship between the driver and Uber, not whether either party felt surprise at some point.
The court merely implements "French labour laws", as it should.
A contract isn't law. I can write a piece of paper saying that you agree to work for me for $1 per hour and even if you sign it, you are still able to take legal action against me because this contract is illegal. Preventing megacorps from exploiting those with few options is obviously a net benefit.
This has nothing to do with my point. Law exists to establish peace and consensus.
Contracts are the most basic construct enabling modern life in a crowded society.
Reinterpreting contracts in a liberal, unreasonable manner destroys peace and introduces chaos.
It prevents members of society from ever trusting one another, and makes all agreements tentative, subject to the retroactive whims of politicians and courts.
It serves the court, positioning it as an ultimate dictator, not the society.
Law exists to ensure the best possible outcome for society. Companies finding creative ways to redefine work so they get all the benefit while offloading the costs is not helpful for society.
I might be inclined to agree with you on contracts if it wasn't for the huge imbalance of power here. An individual who works for these taxi companies has basically no negotiating power against uber so they will never be able to come to a fair contract. Employment protections exist to solve this and they have been rightfully applied to uber like they are to every other company. Laws should adapt to changing reality rather than let companies work around them.
While it is possible to argue against it on philosophical grounds, in French law contracts are not the most basic construct; they have to comply with higher legal norms such as statute or the constitution. Hence a contract must be "reinterpreted" (i.e. is void) if it does not follow the law.
> One may not derogate, by private agreements, from laws that concern public order and good morals.
> A contract may not derogate from public order, either through its stipulations or through its purpose, whether or not the latter was known to all the parties.
> Everyone is free to contract or not to contract, to choose their contracting partner, and to determine the content and form of the contract within the limits set by law.
Most employment law isn't about protecting people from being surprised by an employer; it's about setting limits on what an employer can require of an employee. Both parties expecting at signing that you'll only get paid for half the hours you work, or that you'll be paid below the required minimums, doesn't make it any more legal for the business to do that.
The ability to change a value by reference is crucial to most real-world apps. This is not surprising, as those apps are modeled to represent a real, dynamic world.
Functional languages tend to disregard this, and demand composition over mutation. While this works for some domains, it's a burdensome complication for others.
All the more reason to explicitly model time and not conflate it with the system runtime.
As an example, banks have always added entries to a ledger such that the time history of events is explicit and preserved. Functional programming models this very well. An imperative programmer would probably write the final balance on a post-it note in pencil so that it could be destructively updated. Some OOP programmers have actually tried to model banking like this, because their language encourages it.
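The append-only ledger idea can be sketched in a few lines of Python; the names here (`Entry`, `balance_at`) are illustrative, not any real banking API:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: an append-only ledger where history is explicit.
# Entries are immutable; the balance is derived, never stored and overwritten.

@dataclass(frozen=True)  # frozen: entries can never be updated in place
class Entry:
    timestamp: datetime
    amount: int  # positive = credit, negative = debit

def balance_at(ledger, when):
    """Derive the balance at any point in time by folding over history."""
    return sum(e.amount for e in ledger if e.timestamp <= when)

ledger = [
    Entry(datetime(2020, 1, 1), 100),
    Entry(datetime(2020, 1, 5), -30),
    Entry(datetime(2020, 1, 9), 50),
]
print(balance_at(ledger, datetime(2020, 1, 6)))   # 70
print(balance_at(ledger, datetime(2020, 1, 31)))  # 120
```

Because nothing is destructively updated, any historical balance remains answerable for free.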
Another example might be a text editor buffer. Immutable persistent data structures give you many features for free, such as versioning, undo, and lockless autosave. Memory is cheap now; there's less of a need to throw away some of my edits with destructive updates. Even filesystems and databases have realised this is a good idea. Destructive updates are bad for valuable data.
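A toy sketch of that buffer idea in Python, assuming the simplest possible scheme (keep every past state as an immutable string; a real editor would share structure rather than copy):

```python
# Hypothetical sketch: an editor buffer as a sequence of immutable snapshots.
# Undo and versioning fall out for free because old states are never destroyed.

class Buffer:
    def __init__(self):
        self._versions = [""]  # every past state is retained

    def insert(self, pos, text):
        cur = self._versions[-1]
        # strings are immutable, so this appends a new snapshot
        self._versions.append(cur[:pos] + text + cur[pos:])

    def undo(self):
        if len(self._versions) > 1:
            self._versions.pop()

    @property
    def text(self):
        return self._versions[-1]

buf = Buffer()
buf.insert(0, "hello")
buf.insert(5, " world")
buf.undo()
print(buf.text)  # hello
```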
Imperative programming is only better for writing low-level system code, which is full of state that ultimately no one in the real world cares about.
Your points are not 'wrong' but I think you are being overly broad in the conclusion you are drawing.
Imperative programming is the dominant programming model that 99.99% of all real-world code for the last 60 years has been written in. It may very well be due to be replaced, but it certainly can't just be hand-waved away.
50 years ago 99.9% of computer programs were written in completely imperative assembly language programs, such that even arithmetic involved statements and temporary mutable variables. Fortran was a more mathematical higher-level language with boolean and arithmetic expressions. Functional programming just continues this trend, resulting in more mathematical, higher-level declarative languages, which as far as I am concerned, makes them more suitable for modelling real world business problems.
FP didn't invent immutable data, or mathematical functions. It just makes it easier to use them for some domains, while complicating others.
Imperative programs can have isolated, pure subsections of functionality which are based on composition and immutability, without having to deal with the "ideological" strictness of FP.
You're considering only a minute aspect of large systems. Any real banking system has much more than a ledger to deal with. And most of it involves huge amounts of state, which you can't just pile on top of RAM, or waste CPU on. Anything that doesn't serve a real business/user purpose is just waste.
And text editors don't need FP to support undo either. I'm not sure what point you're trying to make.
The most successful editors on the market, supporting all the features you mentioned, do quite fine with imperative code, and in fact, I'm not aware of any meaningful editor written in a functional language.
> Memory is cheap now
Not really. This comment strikes me as if it comes from someone who has never dealt with significant amounts of data.
FP can be extremely wasteful; take head-tail lists, for example, which are the default container in many FP languages. They waste memory on per-node object headers and pointers, and they scatter the list items across the heap, causing CPU cache misses.
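To make the overhead concrete, here's a rough Python sketch comparing a cons list with a flat array. `Cons` and `from_iter` are made-up names, and `sys.getsizeof` only measures shallow per-object size, so treat this as an approximation of the per-node cost, not a cache-behaviour benchmark:

```python
import sys

# Hypothetical sketch of the overhead a head-tail (cons) list carries
# compared to a flat array: one heap object plus a tail pointer per element.

class Cons:
    __slots__ = ("head", "tail")
    def __init__(self, head, tail):
        self.head = head
        self.tail = tail

def from_iter(xs):
    node = None
    for x in reversed(xs):
        node = Cons(x, node)
    return node

data = list(range(1000))
cons = from_iter(data)

flat_bytes = sys.getsizeof(data)  # one contiguous pointer array (shallow)
cons_bytes = 0
node = cons
while node is not None:           # one heap object per element
    cons_bytes += sys.getsizeof(node)
    node = node.tail
print(cons_bytes > flat_bytes)  # True: per-node overhead dominates
```

Real FP runtimes mitigate some of this (unboxing, chunked sequences), but the default representation does carry this shape of cost.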
Claiming that imperative is only better for low-level is extremely disconnected from reality. Try writing a game with FP, and I assure you no one will care for code "beauty" when FPS sucks.
Sometimes, arguing with FP purists feels like debating socialists.
Also: go ahead and downvote me, as apparently FP zealots can't tolerate questioning their religion.
> Imperative programs can have isolated, pure subsections of functionality which are based on composition and immutability.
And functional programming languages can support imperative programming and mutation, even Haskell. It's a question of what should be the default, which depends on the domain you are working in.
> Any real banking system has much more than a ledger to deal with.
Most banking systems are cobbled together from third-party components and libraries. Low-level languages aren't really needed. Many banks use Python, which is orders of magnitude slower than Haskell.
> This comment strikes me as if it comes from someone who never dealt with significant amount of data.
Actually I have a background in distributed computing, where functional programming abstractions have made processing large amounts of data much easier (MapReduce, Spark).
> FP can be extremely wasteful; take head-tail lists for example, which are the default container in many FPs.
This is a strawman. I can write inefficient code in any language. C++ code can be littered with pointer indirection and memory barriers.
> Sometimes, arguing with FP purists feels like debating socialists.
You'd be more convincing if you provided a substantive response instead of referring me to "investigate". I've used FP languages extensively, and they all make mutation very hard, by design.
Do you disagree with this premise? Do you disagree with my claim that the world is dynamic, and that FP languages are limited in representing it?
Please address my points specifically if you have anything meaningful to contribute.
I would be more convincing, but the whole point of my comment was that you need to do the work to get up to date with where the debate is. Don't ask other people to do the work for you.
Anyway, FYI that's why you've been multiply downvoted. Not because there's some FP cabal who downvote pro-imperative comments.
Didn't expect otherwise. You'd use a logical response if you had one.
My comment is a direct response to the OP who implied FPs are going to "take over".
I'm being downvoted probably by FP zealots who can't deal with the idea that their solutions aren't perfect or universal.
BTW, I'm not "against" FP in general at all. I have used it in the past, and it has useful idioms to contribute to imperative languages, and it's perhaps useful on its own in some domains; I'm just pointing out its limitations.
D & Nim (especially Nim) are very appealing from a language perspective, but the lack of an IDE undermines the advantages they bring.
I personally won't consider using a language for a meaningful project unless it has a decent debugger and reasonable refactoring, i.e. at least find references & rename.
Beef is building on an IDE, which is a refreshing take, but it's still too early.
Using that .spacemacs file you should be able to launch an Emacs fully prepped for D! The only other dependencies are a D compiler and DCD (https://github.com/dlang-community/DCD) built and in your PATH. I'd be happy to answer any questions if something doesn't work.
Other IDEs with a language-server implementation can make use of DLS: https://github.com/d-language-server/dls. VSCode and Sublime are two I've had good success with, but I always end up back with Emacs ;)
As for a debugger, you can debug any D program with GDB/LLDB (depending on whether you compile with dmd/ldc). Then you can use any graphical debugger that uses GDB. Nemiver is my favorite, and there are a few web-based ones that are nice: https://www.gdbgui.com/
I've also had really good luck doing D with Spacemacs, but Spacemacs is hit or miss to set up for me. Other times I just use Sublime Text for the syntax highlighting.
I can't remember what I used for Nim; I think I used Nano or Vim, just because I've only tested it once or twice. I haven't used it heavily yet, though I do have an interest in Nim, since I enjoy Python programming.
My own pet peeve about D is the lack of local documentation. If I install my system's compiler packages¹ I receive basically no documentation at all. That is a huge difference to the experience with nim/rustc/go/$others which all provide a nice fat $lang-doc package in their Suggests field, and all of those contain full standard library docs/tutorials/etc.
1. gdc or ldc on Debian. The experience /may/ be different with the dmd repositories, but being external they're of less interest to me.
D comes with an interesting utility called `dman`. Run it from the command line and as an argument give it a D topic and it'll open a browser on the topic.
dman dman
will, of course, open the browser on a page about dman.
Which, as I noted, is still a very different experience from go/nim/rust in Debian. It wasn't a pet peeve about no documentation; it is about how I can interact with it compared to other languages.
Which leads me to wonder why we don't have dmd{,-doc} in Debian now, given it has been ~3 years since the compiler was open sourced. I can't even find an ITP in a quick search :/
Nim works very well with Emacs and Visual Studio Code at the very least, also Vim (from reports by others) - including debug support. What are you missing?
It should work at the Nim source level as well: if there are any problems, that's a bug, so please report it. (Disclaimer: my partner and I also work on a somewhat more advanced commercial debugger environment which also targets Nim, so I guess there are many kinds of efforts.)
VisualD is pretty damn good, and VS Code has a few D plugins that are OK to excellent when they work (which is most of the time).
There is also a D specific IDE which I can't remember the name of which is probably best if you just want something that works (it's pretty lightweight)
Where Mindustry falls short for me is that once the waves are over, you're forced to leave your base and start all over.
I'm not motivated to invest in optimization if the production line is going to be abandoned soon; the game becomes more of a tower defense, and less of a production one.
Play a custom game in survival mode and see how many waves you can survive. I think the campaign is more of a tutorial, and a mode that offers a sense of progression.
Have you tried multiplayer? I think it would make optimization more important, as I agree in single player it's a bit too easy to cheat the system with sloppy production lines by just adding hella barricades and getting the best guns.
It is a disease and survival has dramatically increased because of amazing progress in treatments. Just look at the recent drop in cancer mortality in the US.
It is incorrect and irresponsible to blame people who get cancer for eating badly etc. Sometimes things go wrong at a cellular level and cancer happens.
Did you read the article? It states the drop in cancer mortality is largely due to the decline in smoking.
'A 2006 analysis concluded that "without reductions in smoking, there would have been virtually no reduction in overall cancer mortality in either men or women since the early 1990s."'
There's no real drop, just number manipulation. They label you as "healed" if the tumor doesn't return after some short period of time, which gets shorter by the year.
Actually, mortality rates are at an all-time high, globally.
Are you referring to the shift away from using overall survival to progression free survival? If so I agree that this rubbed me the wrong way as well.
However if one looks there has been tremendous progress in some cancers. As an example CML went from a swift death sentence to something approaching a chronic condition.
That’s because everyone dies! The article avoids mentioning that cancer rates have risen as other causes, particularly infections, have declined over many decades. The average lifespan has increased globally. In developed countries this is largely due to improvements in cardiovascular drugs/treatment and cancer drugs.
The standard survival rates are quoted as 5 year survival unless the cancer is particularly aggressive eg pancreatic. The statistics around cancer are very high quality and public in most countries. However, I suspect data will not influence your view.
The entire medical field is based on opinion and wizardry. We're sicker and weaker than ever.
I've healed myself from various conditions by going carnivore.
Full disclosure: I have not read the article yet but gathered a sense of it from the comments here.
I like your citations. They make me think of a recent radio interview in Canada where an MD was discussing cervical cancer. He would not say that vaccination for HPV prevents cervical cancer. He would only say that it prevents pre-cancerous lesions, and that cervical cancer "is never seen without pre-cancerous lesions".
I can understand using scientific rigour to avoid jumping to conclusions. However, it makes me wonder about Louis Pasteur and the discovery of the microbial origins of disease. Would he have said: "I can't say that these bacteria cause disease X, but we have never seen disease X without the presence of these bacteria"?
All that to say: perhaps medical research has been going down the wrong path with cancer. We have one clear case of viral "involvement". :-)
How big a stretch is it to extend research in that direction and look for more cases?
But not all cancers are caused by viruses. The many toxic compounds in tobacco smoke are probably the best-known non-viral cause. Radiation is also a well-known cause. Even more info: https://en.wikipedia.org/wiki/Carcinogenesis
From your wikipedia link, this technique sounds like a great way to find more cancer causing viruses.
"...developed a new method to identify cancer viruses based on computer subtraction of human sequences from a tumor transcriptome, called digital transcriptome subtraction (DTS)"
It's from 2008. There have been enormous improvements in DNA technology since that time.
While the rewrite enabled faster development of the language as a whole, it also gradually destroyed the IDE's performance. The editor in VS 2019 is simply unworkable.
I blame this directly on the immutable AST. While a nice concept in theory, it causes too many allocations, and is cumbersome to work with.
I had to ditch ReSharper to get to 2019 because together with the performance of the IDE itself, it just wasn't usable. R# ate gigs of Ram and Roslyn does the same. It's not surprising since they basically do the same thing, in managed code! But I can't pay the CPU time and memory to analyze my code TWICE on every edit. I also suspect things like switching build configuration offers thousands of opportunities to have some IDE widget hold references to old compilation data structures which are never garbage collected. On the bright side at least 2019 works pretty well without R#.
> I had to ditch ReSharper to get to 2019 because together with the performance of the IDE itself, it just wasn't usable. R# ate gigs of Ram and Roslyn does the same. It's not surprising since they basically do the same thing, in managed code! But I can't pay the CPU time and memory to analyze my code TWICE on every edit.
I'm holding off on 2019 as much as I can. Between 'forcing' an upgrade for .NET Core 3.0 and the fact Resharper slows it down too much, I decided to give Rider a try.
I'm finding myself not missing VS a whole lot; on one hand Rider is taking way more RAM to start and load, but it stays pretty constant after the first debug session, winds up staying under VS for memory on longer loads (Especially if I've got multiple solutions open) and it's smoother than VS the whole time.
It is, but that's not too relevant, as the ReSharper component runs in another process (but that's managed code as well, so probably also runs as a 64-bit process).
I'm not using ReSharper, and never have. The editor simply gets slower every version, and now it's come to the point where I seriously consider downgrading.
I did the exact opposite: I migrated to Rider, an IDE built around ReSharper. In my experience, it is much faster than VS2019 for my use case (including a huge monolith with hundreds of projects)
I highly suggest giving Rider (also by JetBrains) a try. They run most things in separate processes/threads instead of running everything in the UI thread like VS does. It's almost a drop-in replacement.
NB: It's not VS that's forcing ReSharper to run in the same process. It's JetBrains refusing to listen to the VS team on how to build VS extensions that need plenty of resources since 2008 ... (they're finally considering it as of summer 2019, I think).
> I blame this directly on the immutable AST. While a nice concept in theory, it causes too many allocations, and is cumbersome to work with.
This isn't a cause of performance problems you're seeing. What's most likely happening is that the overall size of your code has increased a lot over time, causing performance issues due to issues that have existed for a long time, but weren't being felt yet.
However, immutable vs. mutable isn't related to the issue I linked. It's just about keeping more data around than is (perhaps) necessary. You'd see the same issue with a mutable AST. If you're curious about specific work that's being tracked you can use this label: https://github.com/dotnet/roslyn/issues?q=is%3Aopen+is%3Aiss...
And if you submit reports via the VS Report a Problem tool, with the option to collect a diagnostic trace, you'll generate exactly the data needed to fix issues that you're facing. The team is very keen on addressing performance problems, especially if there's diagnostic data that can pinpoint the source of a problem.
> the overall size of your code has increased a lot over time, causing performance issues due to issues that have existed for a long time, but weren't being felt yet.
Not really. A simple empty project displays the same problems. You can try going back to 2017 right now with any project you're working on, you'll feel the difference instantly.
IntelliSense simply takes longer to respond, and likewise other editor functions.
Their feedback forums have hundreds of similar reports.
Interesting. I've observed the opposite, with VS 2017 often being unbearably slow in comparison as the codebase gets large. But I can appreciate that you may be experiencing the opposite. I highly recommend filing issues, especially if you can do so with specific, reproducible problems. Those tend to get resolved quite quickly.
The issues are getting resolved in the public bug tracker, pretty quickly indeed, typically with "can't reproduce, won't fix", sometimes "not a bug, won't fix".
I'm not sure the software problems are getting resolved.
So there are a _lot_ of legitimate problems being fixed and enhancements being added over pretty short periods of time.
When something is resolved as "no reproduction" or "not a bug", that's because there was an earnest attempt to reproduce an issue with the latest bits set to go out to a release with no reproduction, or something is truly by design (e.g., user files an issue because they would prefer a feature to do something different than it does today).
That is interesting and counter to my experience. Consider looking into alternative causes (uninstalling plugins that may not be playing nicely with your particular version, background updates, etc.).
The problem with immutability in a rapidly mutating environment is that the theory clashes with reality.
Anytime a leaf node changes, all its ancestors have to be replaced, instead of just updating the leaf in place. (I'm aware of the red-green node separation, but I believe that in practice most of the tree is constantly regenerated all the time.)
I realized it when trying to write a complex analyzer. I had to replace the tree all the way up to the project level.
If you combine different chunks of the tree, each with a slight change, you're forced to recreate each of those chunks.
This is extremely wasteful, and no wonder the IDE behaves so poorly.
Also, I forgot to mention that in some analyzers, even if there's no actual code change, the symbols change meaning, and then you're forced to recreate the tree regardless, because you can't change the node-symbol association.
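The path-copying cost described above can be sketched in Python. The `Node` type here is a made-up illustration of the general technique, not Roslyn's API: updating one leaf rebuilds every ancestor, while untouched subtrees are shared.

```python
from dataclasses import dataclass, replace
from typing import Optional

# Hypothetical sketch of "path copying" in an immutable tree: changing one
# leaf forces a new node at every level up to the root, while siblings that
# weren't touched are shared between the old and new trees.

@dataclass(frozen=True)
class Node:
    value: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def update_leftmost(node, new_value):
    """Replace the leftmost leaf, rebuilding every ancestor on the way up."""
    if node.left is None:
        return replace(node, value=new_value)
    return replace(node, left=update_leftmost(node.left, new_value))

root = Node("root", Node("a", Node("leaf")), Node("b"))
new_root = update_leftmost(root, "LEAF")

print(new_root is root)              # False: root was rebuilt
print(new_root.right is root.right)  # True: untouched subtree is shared
```

The sharing keeps the asymptotics reasonable (O(depth) new nodes per edit), but each edit still allocates a fresh spine, which is the allocation pressure being complained about.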
I found that VS 2019 is faster than VS 2017.
But VS 2019 with ReSharper is a lot slower than VS 2017 with ReSharper.
I would blame resharper there, not VS.
VS has never felt as fast as it does right now.
The biggest reason IMO not to use ReSharper is actually Rider (so I don't use VS anymore either).
You get all the ReSharper goodies, but inside an editor that's more nimble than vanilla VS.
I just don't use ReSharper anymore and almost don't miss it.
I tried Rider but don't use it anymore.
I don't like the IntelliJ/Rider UI: the default shortcuts are horrible, and basic actions are hidden in submenus.
In my opinion, Visual Studio UX is not great, but far better than JetBrains products.
If I learned anything from my Borland days, it was to only get my IDE tooling from the same factory that makes the sausage, instead of always playing catch-up with the OS vendor's tooling.
Visual Studio is 32-bit and limited to about 3.5GB of RAM. The increasing functionality and solution sizes create bottlenecks in the processing.
This is a fundamental problem that for various (outdated and bad) reasons the team hasn't fixed. They've been refactoring components to run in separate processes but it's slow progress and still won't solve the main thread running out of memory anyway.
OmniSharp + VSCode uses Roslyn as well, so there's no rewrite going on here. But VSCode is very different from VS: it's a process host where language services run in entirely separate processes. In VS, things are a bit more complicated, since lots of things run in the IDE process, but in the case of Roslyn, other processes are spun up to run specific things.