As frustrating as I imagine it is to be in the position of having to figure out how to teach students things that seem like basics they should already know, I'd argue that this issue stems from two circumstances that are worth the cost: the filesystem abstraction is increasingly unnecessary for non-technical people, and incoming students who want to study computer science aren't required to already have technical knowledge beyond that of their non-CS peers. The alternatives to where we are now would be either regressing the usability of interfaces for the overwhelming majority of people or taking away access to computer science education from people who didn't or couldn't go out of their way to learn topics that only matter to technical people in advance.
The growing irrelevance of the filesystem for the average computer user isn't an "epidemic" any more than the obsolescence of the CLI for the average person; it's just additional progress toward adapting computers to be more human-friendly rather than the reverse. We didn't always have cars with automatic transmissions, cruise control, or blinkers that turn off on their own once you've finished turning, and no one describes the evolution of driving from decades ago to the present in the language of a public health emergency.
Over time, technology becoming simpler to use by parts of the interface getting pushed down into implementation details is a good thing for the vast majority of people in the long run, and it's important for those of us who are technical not to mistake the requirement to use a certain feature for the ability to access it. I think the biggest concern with the dominance of mobile computing isn't that users might not need to know about the filesystem but that users might not have control of their own devices in the long run if the ability to access the filesystem is removed. There's precedent for getting the non-technical public to care about technical details when they understand how it affects them (e.g. right to repair, the pushback against SOPA/PIPA, net neutrality), but I worry that we'll miss the window to influence things similarly here if we focus on the wrong thing.
The readme seems to indicate that it expects pytorch alongside several other Python dependencies in a requirements.txt file (which is the only place I can find any form of the word "dependency" on the page). I'm very confused by the characterization in the title here given that it doesn't seem to be claimed at all by the project itself (which simply has the subtitle "Minimal LLM inference in Rust").
From the git history, it looks like the person who posted this here is someone who has contributed to the project but isn't the primary author.
> The readme seems to indicate that it expects pytorch alongside several other Python dependencies in a requirements.txt file
That's only if you want to convert the model yourself; you don't need it if you use the converted weights on the author's huggingface page (in the “prepared-models” table of the README).
> From the git history, it looks like the username of the person who posted this here is someone who has contributed to the project but isn't the primary author.
Yup, that's correct; so far I've only authored the dioxus GUI app.
> If they could elaborate on what exactly they mean by saying this has "zero dependencies", that might be helpful.
I've seen issues in Go codebases a couple times where a _lot_ of effort has been spent trying to track down allocations and optimize memory usage. It sounds like the parent comment is describing writing new code that strives to avoid allocations, which probably isn't much harder in Go than in similar languages, but I feel like Go is one of the more ambiguous languages in terms of the amount of context needed to identify whether a given variable is allocated on the heap. A pointer might be a pointer to the stack, or a pointer to the heap, or a pointer to an interface that might _also_ have a heap allocation on the concrete-typed value behind it. If you see a slice, it might be a heap allocation, or it might be a reference to a static array, or it might be a reference to another slice...which has the same possibility of being a heap allocation, a reference to a static array, or just another link in the chain to a different slice.
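To make that concrete, here's a minimal Go sketch (the type and function names are made up for illustration) where two nearly identical declarations end up in different places:

```go
package main

import "fmt"

// point is a trivial value type used to illustrate escape analysis.
type point struct{ x, y int }

// byValue returns a copy, so p can stay on the stack.
func byValue() point {
	p := point{1, 2}
	return p
}

// byPointer returns &p, so escape analysis moves p to the heap.
func byPointer() *point {
	p := point{1, 2}
	return &p
}

func main() {
	a := byValue()
	b := byPointer()
	// Passing arguments to fmt.Println boxes them in interface values,
	// which can itself cause heap allocations -- another allocation
	// that's invisible at the call site. Run `go build -gcflags=-m`
	// to see the compiler's escape-analysis decisions.
	fmt.Println(a.x, b.y)
}
```

Nothing at the declaration sites distinguishes the two; you have to trace how each value is used (or ask the compiler) to know which one allocates.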
This is a place where I feel like the type of simplicity that Go touts isn't actually optimizing for the right thing. Having a single type for all pointers certainly has a sort of abstract simplicity to it, but I don't feel like it actually makes things simpler in the long run. My perspective is that "conceptual" simplicity is a balancing act between not having too many concepts and not having individual concepts be too confusing, and I'm surprised that Go is used for domains like needing to completely avoid allocations in a hot path when the language doesn't really feel like it's designed to make that easy.
I don't disagree, but for whatever reason I've continued to see projects using Go in places where using a large amount of memory becomes a bottleneck. The best rationale I can come up with is that people measure whether it's a reasonable choice based on the theoretical floor of memory usage that might be reached by writing perfectly optimal code, rather than the amount of memory expected from the code that will actually be written, and then underestimate the amount of work needed to get the latter down to the former.
There are myriad reasons. Usually it's because the rest of the project is in Go, so adding a new language just increases complexity. Go is fast to prototype in as well. So in general there's nothing wrong with it, but once these questions start popping up (memory and whatnot), that's the perfect time to refactor the code appropriately, likely into a different language that can better utilize the hardware for the performance needs of the application itself.
In my calculus class in high school, one of the problems in the set at the end of the chapter was about the rate of growth of kudzu. None of us had heard of it (including the teacher), which I guess might be due to being in New England rather than somewhere it's more of a problem. I remember us thinking it was some sort of crop rather than a weed, so we were all very surprised at the super high growth rate it used in the problem.
> The article doesn't mention it, but am I right in assuming this basically comes from McDonald's? There are a lot of places around the world that copy the "'s" where it doesn't exist natively, but only for restaurant names or similar -- like "Bob's" is the McDonald's clone in Brazil [1].
For whatever reason, it drives me crazy when I hear people refer to Pizzeria Uno as "Uno's". I've had conversations about it multiple times with different people in my family. There's no one named "Uno", it's a number! I try not to be a prescriptivist but for whatever reason this bothers me to an irrational degree, and I can't understand why nobody else notices.
This sounds very Midwestern to me. Where I come from that would happen a lot. It wasn't necessarily that people didn't know the real name of the place. It functioned more like an inflection that helps to distinguish between the company, and a specific storefront operated by that company. Compare it to the distinction between "Alice" and "Alice's". Alice is the person, and Alice's is her house.
For example, you'd say "JCPenney stock is up by 32 cents this week," but you'd also say, "I bought this shirt at Penney's."
I come from Michigan, and have found that the two fastest ways people identify me are 1) calling fizzy soft drinks "pop" and 2) that I add an "s", e.g. King Sooper's or Meijer's.
Is this not restricted to company names deriving from personal names (or words that are perceived as such), though? For example, would you say “I bought this shirt at Target’s”?
In general, yeah, but this is sneaking into the area of language where it's all ad-hoc conventions and there aren't actually any reliable rules. I suspect, for example, that there's a sandhi component that helps predict which store names do and do not get the S added.
I don't think there's a particular rule that a number can't act as a name, like 007's movies. Or that the thing possessing has to be a person, e.g. England's weather.
I'd argue this is less akin to calling Casino Royale "007's movie" and more like referring to the sequel to The Godfather as "Part II's movie". I suppose this is a bit of a philosophical question of whether the index of something is the same as the thing itself, but I think I fall in the camp that things exist distinct from the way we identify them, and names are just labels rather than some inherent part of the thing itself.
> calling it Uno's isn't inconsistent with how we talk about Walmart's stores or Google's website
No, calling it "Uno Corp's pizzeria" would be the equivalent. Nobody says they're "Going down to Walmart's" or "doing some research on Google's."
Penney's was founded by James Cash Penney. The store was his store, that is, it was Penney's store. I think the omission of the apostrophe was kind of artistic license, but I'm only addressing the silliness of adding the "S," with or without an apostrophe, to something that isn't a person's name.
My family always used "Penny's" to refer to JC Penny. They also continued to refer to Macy's as Dayton's for years after they had changed their name because the locations were all the same, just the name had changed.
It's funny because I too always felt saying "Penny's" was a regional thing, but more of a Midwestern thing.
I can understand Penney's or Dayton's since those were people who founded eponymous stores. I suppose we have our answer -- people established a habit of the "S" when that type of naming was so common, and extended it instinctively to all stores even though there was never a Mr. Kmart or a Mr. Circuit City.
Fun tangent: I learned pretty recently that the southern California grocery chain is named after a man with the last name "Ralphs," so it never had an apostrophe and indeed shouldn't have one (in any language).
I'm from northern Ohio (Cleveland area) and it's only reading this thread that I'm learning/realizing that the name "JCPenny" isn't plural or possessive. My family always called it "Penny's" too.
The Australian English thing to do is to drop the apostrophe, use an optional creative contraction to make the phrase even shorter, and thereby turn the entire thing into a noun :)
I.e. Maccas vs McDonald's
Of course, the official website https://mcdonalds.com.au/about-maccas/maccas-story uses an apostrophe which is now making me have the same reaction as the Germans :( and makes me think it was run through some international filter :p outrageous!
> And the domain the company uses is "unos.com", so at least the corporate entity has accepted the name.
Yeah, I've heard servers there say "welcome to Uno's", so I know I've already lost the battle. Like I said, it's not a rational annoyance though, so that doesn't make me feel any better when I hear it.
To be honest, even when going to the original Pizzeria Uno (or Due) I’ll probably still call it “Uno’s” ‘cause it’s a weird part of the Chicago dialect. We do the same thing for the grocery store Jewel-Osco, calling it “da Jewels”
Fellow Chicagoan here. It's funny you say that. My wife calls Jewel-Osco "Jewels" lol. I am just starting to realize that not everyone talks this way haha.
If the issue is that people are dying leaving behind significant wealth but not documenting this, just make the estate tax 100% on any assets missing documentation like this. I'm sure the lawyers would figure out the rest.
The issue is that the precedent they point to was categorically ruled _not_ an illegal monopoly in a similar court case. I don't disagree that there should be more competition for platforms, but I can also recognize that the legally binding opinion on the matter disagrees with mine.
According to https://en.wikipedia.org/wiki/Epic_Games_v._Apple, the ruling definitively stated that Apple does not need to add third party app stores to its platform, and the appeal upheld that ruling (with the Supreme Court declining to hear further appeals, meaning the case is finished). The only change Apple had to make is supporting third-party payment platforms.
> Judge Rogers issued her first ruling on September 10, 2021, which was considered a split decision by law professor Mark Lemley.[63] Rogers found in favor of Apple on nine of ten counts brought up against them in the case, including Epic's charges related to Apple's 30% revenue cut and Apple's prohibition against third-party marketplaces on the iOS environment.[64] Rogers did rule against Apple on the final charge related to anti-steering provisions, and issued a permanent injunction that, in 90 days from the ruling, blocked Apple from preventing developers from linking app users to other storefronts from within apps to complete purchases or from collecting information within an app, such as an email, to notify users of these storefronts.
> ...
> The Ninth Circuit issued its opinion on April 24, 2023. The three judge panel all agreed that the lower court ruling should be upheld. However, the Ninth Circuit agreed to stay the injunction requiring Apple to offer third-party payment options in July 2023, allowing time for Apple to submit its appeal to the Supreme Court.[79] Both Apple and Epic Games have appealed this decision to the Supreme Court in July 2023.[80][81] Justice Elena Kagan declined Epic's emergency request to lift the Ninth Circuit's stay in August 2023.[82]
> On January 16, 2024, the Supreme Court declined to hear the appeals from Apple and Epic in the case.
Given that the claim I was responding to implied that it was foolish of Google to cite Apple due to them being a monopoly, can you elaborate on why you think this ruling somehow was an obviously bad idea for them to argue as a precedent? To repeat myself from before, I'm _not_ expressing personal opinion about whether iOS and Android should be allowed to operate the way they do, but asserting that the court ruling does in fact state that the current way Apple handles third-party app stores is legal.
> What people fall in love with is the rigorous static typing, the Option monad, the exhaustive enums (which are just sum types in disguise), the traits (type classes in disguise), the borrow checker (a half-way house to immutability) etc.
I feel like I say this every time this sort of discussion comes up, but I still think that there's a space for a higher-level language with most of what people like from Rust that has a (tracing) garbage collector and is slightly more relaxed with trying to design a way around every marginal heap allocation. Most of the time I bring this up, someone will suggest something like Swift or OCaml, but I think the part people miss is how even despite all of the complexity that comes with being a systems programming language, Rust really goes out of its way to try to be developer friendly despite that.
Yes, it's a meme that Rust programmers are zealous evangelists and want to rewrite the world in it (which is a bit of an exaggeration in terms of lumping all Rust enthusiasts into that group, but there's certainly an element of truth in it), but no one seems to talk about how _weird_ of an idea it is for a language notorious for having a terrible learning curve to be so popular with people perceived to be lacking real-world experience with the domain. How did a language that's supposed to be so hard get popular to the point where people view its fans as pushing it aggressively? You might chalk some of it up to marketing, but I think people undervalue how much effort is put into the little details in Rust to try to make it more approachable, like error messages, standard library documentation, first-class support for all major platforms, and high-quality introductory materials (e.g. both the original and rewritten The Rust Programming Language book, Rust by Example, Rustlings). I don't think the same experience is there if you want to use Swift on Linux (where the support isn't nearly as strong, and a lot of the fancy new things coming out won't be available) or OCaml (from googling right now, a debate on reddit about "which stdlib should I use as a beginner" is on the first page of results when I search "ocaml std" or "ocaml stdlib").
One issue is that with GC, you lose prompt finalization of resources. A lot of code is written with this assumption in mind (e.g., file buffers are flushed if the last reference to the file is dropped—which is arguably incorrect due to the lack of error checking). And the borrow checker is the only thing that keeps everything from being mutable in place in Rust today. Having GC would open the possibility for alternatives to the borrow checker without compromising memory safety, but even Pony-style reference capabilities probably won't lead to a language where abstractions compose much more easily than in Rust today.
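A minimal Rust sketch of that flush-on-drop issue (the file name is a placeholder):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let mut writer = BufWriter::new(File::create("out.txt")?);
    writer.write_all(b"important data")?;

    // If `writer` simply went out of scope here, BufWriter's Drop impl
    // would flush the buffer -- but any I/O error during that implicit
    // flush is silently discarded. Flushing explicitly surfaces it.
    writer.flush()?;
    Ok(())
}
```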
Maybe a language with a similar syntax, traits, monomorphization, and macros would still be interesting to many people? Would people prefer traits and macros over ad-hoc polymorphism (in the style of C++, which could subsume the macro use cases, too)?
I don't think constructors are the challenging part. It's about lexically scoped destruction. Certainly there are languages that have that and permit garbage collection, however those constructed values are necessarily second-class citizens and behave somewhat differently than ordinary values. There's probably some reasonable middle-ground, like constructors returning an owned reference that explicitly can be turned into an unowned reference, thus opting out of deterministic destruction.
> but I still think that there's a space for a higher-level language with most of what people like from Rust that has a (tracing) garbage collector and is slightly more relaxed with trying to design a way around every marginal heap allocation
I dream about a Rust subset that's exactly that. What would be even better is if you could just use Rust packages directly with it. Since those libraries already compile, the borrow checker has already vetted them.
A web framework doesn't need GC, it just needs some ability to express the idea that per-request code should get its own allocator, with knowledge of said allocator propagating down through function calls.
Jai solves this by having a "context", which includes an "allocator" member by default, whose default value is a standard heap allocator. You can override this allocator member to point to your own allocator, so it's easy and straightforward to have the main server procedure allocate a pool of memory, create a new context with a new allocator that uses this pool of memory, then "push" this context for the duration of the request resolution, then free (or mark for reuse) the memory pool afterward.
I gladly take the whole rust "package" because overall it's just so good, at least for opensource / hobby stuff. I wish there was an equivalent but with GC, to use for work.
> How did a language that's supposed to be so hard get popular to the point where people view its fans as pushing it aggressively?
I think Rust is especially popular with a demographic that was not previously exposed to lower level programming. Meaning younger programmers (because modern languages are higher level) and web centric programmers (because we're in a boom of web development).
This demographic had a hard time entering systems programming because, while fascinating, its exposure is lower (fewer jobs, fewer projects) and its entry cost (learning C, or pre-C++11) is higher.
Rust made systems programming more accessible, and systems programming, just as anything related to how things work under the hood, is fascinating.
Now Rust is hard, as you noted, but not hard in the same sense that C is hard.
C is hard because low-level stuff bites you immediately. You can't easily just use a random library in C if you don't understand your build system, linkage, etc. If you mess up in C, the compiler won't tell you either; you'll have to debug your way out of it.
Rust is different, in the sense that its difficulty is limited to the compiler saying "nope". If you can get the compiler to say "yep", then you're 99.9% of the time safe. Now getting that compiler to say "yep" may take time, indeed, but all in all you most often can just get away with sprinkling unwraps, clones and Arcs all over the place until it works.
In that sense, I think Rust's popularity essentially lies in the fact that it's a way to do low-level stuff with a barrier to entry limited to being stubborn enough to learn it.
Having learnt C in high school, and having done Z80 assembly before that as a 12-year-old kid with only a bunch of books from the local library, it's kind of interesting to read about C being feared and hard.
Yes, it does corrupt memory, there are some crashes, and I usually bash C myself; nonetheless it seems we have too much of a "safety playground" around learning processes nowadays.
Scala 3? It has the vast Java ecosystem available and state of the art GCs (plural), with either a focus on throughput or low-latency. Also, it can exclude the null value from the type system, marking it explicitly with a union type.
I think all JVM langs claim interoperability with the Java ecosystem. The situation on the ground is never as nice as sold, in my XP.
Scala is a great example where I've seen the "best" option being rewrapped libs with scala calls and types rather than a native solution and the dev XP just isn't as good overall.
You can hardly get higher quality code than what is generally available in Java, especially with their stability. Wrapping it to use the language conventions seems like a pretty solid choice to me.
The dev xp, I sorta agree on, but it has improved a lot.
> How did a language that's supposed to be so hard get popular to the point where people view its fans as pushing it aggressively?
Popular languages don't really have evangelism or fans pushing it aggressively. Those are traits of smaller languages that don't interop well with other ecosystems so they need a lot of evangelism to build out the library ecosystem.
> there's a space for a higher-level language with most of what people like from Rust that has a (tracing) garbage collector
What non-memory management related things is it people like from Rust that is missing from, say, Java or Kotlin? Because those have great web frameworks that address all the features requested in the article and a whole lot more, there's good first class support for all major platforms, lots of documentation etc. These languages are also famously developer friendly with good error messages.
> Those are traits of smaller languages that don't interop well with other ecosystems so they need a lot of evangelism to build out the library ecosystem
> What non-memory management related things is it people like from Rust that is missing from, say, Java or Kotlin?
I'd argue that Rust has better interop with C, C++, JavaScript, Python, Ruby, and probably almost every other non-JVM language than Java and Kotlin. I'm not sure why you think that getting people to write more libraries is the goal of evangelization; if anything, I think Rust is somewhat notorious for people writing lots of libraries but comparatively fewer full applications.
Independent of interop (which I'm not really sure is as important to understanding why languages are or aren't popular as you seem to imply it is), I don't think the tooling in Java is nearly as beginner friendly as Rust. It's not just about the code itself; handling dependencies and building an application that you can run outside of an IDE are not nearly as straightforward in Java as plenty of other languages nowadays.
My point isn't that Java is bad or that doing things in it is hard in the absolute sense, but that "it's possible to do this in language X" is not the same as "it would be easy for a beginner to figure out how to do this in language X". I think there's an inherent lossiness in trying to distill what people like in a programming language into a bullet-pointed list of features, and it's an easy trap to compare those lists and conclude that one language doesn't have anything novel to offer based on that rather than the entire experience of working in a language.
Does Rust really have better interop? At minimum you're going to have to think about the gap between manual lifetimes and GCd allocations, bindings generation, ensuring the target language runtime is installed and so on.
You can call into JS, Python, Ruby and other such languages from Java like this:
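(A minimal sketch, assuming GraalVM's polyglot API, which matches the features described below; the class name is made up:)

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotExample {
    public static void main(String[] args) {
        // A single polyglot context hosts guest languages on the same
        // heap, JIT-compiled together with the Java host.
        try (Context context = Context.create()) {
            Value doubled = context.eval("js", "[1, 2, 3].map(x => x * 2)");
            System.out.println(doubled.getArrayElement(1).asInt()); // 4
        }
    }
}
```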
It's very easy and requires no build-time bindings generation or Python/JS/Ruby runtimes to be installed. You can add pip dependencies via the Java build system you use, as if they were regular libraries. It will also JIT-compile the different languages together as one so the boundaries are optimized, you get transparent exceptions, callbacks work transparently as the whole heap is GC'd as one, you can use a debugger in a unified way across languages, and so on.
But this is sort of beside the point. Java once had lots of evangelism, partly to help build out the library ecosystem, but that was done years ago and now there are lots of libraries to meet most needs. So as a consequence you don't hear about it as much. This thread is a case in point. Lots of people suggesting rarely used languages like O'Caml or Zig, nearly nobody suggesting more obvious candidates that are used for this task, every day by nearly every big company in the world.
> I'm not sure why you think that getting people to write more libraries is the goal of evangelization; if anything, I think Rust is somewhat notorious for people writing lots of libraries but comparatively fewer full applications.
Wouldn't that be expected then? Rust has had lots of evangelism which has successfully yielded lots of libraries?
> handling dependencies and building an application that you can run outside of an IDE are not nearly as straightforward in Java as plenty of other languages nowadays.
I think this may be based on an outdated idea of how things work nowadays. I have my beefs with Java build tools but if you just want to build and distribute a web app it's easy. Using the stack I'm most familiar with:
1. Starting from a blank computer, install IntelliJ or other IDE of your choice.
2. Go to https://micronaut.io/launch and use the little wizard to pick whatever languages and features you want to start with.
3. Download the generated project, open it in your IDE. All the dependencies are now downloaded automatically including the build system and the Java runtime itself. Likewise if you picked features that use JavaScript.
4. Tweak it, run it. To ship it you have several options, but an easy approach is to add this to your build.gradle file (if you're using Gradle):
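(A sketch assuming the Micronaut Gradle plugin, which supplies the dockerBuild/dockerPush tasks; the registry and image name are placeholders:)

```groovy
// Tell dockerBuild/dockerPush which image name to build and publish.
dockerBuild {
    images = ["registry.example.com/myapp:latest"]
}
```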
and then invoke the dockerPush build target, either from the CLI (./gradlew dockerPush) or the IDE. You can also compile it to a standalone Linux executable with another build target. That's all there is to it. I don't think Rust improves on this situation. Note that the above instructions work on any computer in the same way, including Windows, with no additional work required.
You're behind the times. Write a web app using a framework like Micronaut, Spring Native or Quarkus and you'll get a native Linux EXE out of the build system that starts faster than a C program would (due to pre-initialization).
Not that installing Java is all that hard. apt-get install openjdk is sufficient.
Last I checked there was a significant disadvantage to using rather basic Java JIT code in a cloud environment: slow startup time and poor initial performance in terms of requests per second & latency meant scaling on demand didn't work very well. I suggested we move to GraalVM and AOT compilation on that project but we just ended up over-provisioning by a significant factor to smooth things out.
The problem is friction. To run a rust app you can just run it. To run a java app you need to install java first. This is no problem for backends running on servers, but client apps (like Minecraft) have to include their own JVMs to reach a wider audience, and this solution still introduces a bunch of complexity.
> it is a matter to actually care to learn about their existence
That's kind of the whole point I was trying to make above; if one language makes something super easy to do without having to go looking for instructions, compared to another that doesn't, that makes a difference in how people decide what to learn. Lots of individually small quality-of-life things add up and can make a language that would otherwise be unapproachable way easier to get started with than languages that don't prioritize that sort of thing.
If someone has two choices that produce the same output, and one of them takes more effort to figure out, then people will pick the other one more often. There's no inherent virtue in spending effort to do something equivalent. It's not clear to me why you think pointing this out deserves a sarcastic response.
Yeah, sorry. I was aware of that but was being loose with words. I do think that part of the appeal is rust feels more bare metal and direct to people even if they're using heavy abstractions as compared to Java/kotlin or C# programming.
Strongly agree, yet debates about improving the ergonomics of the language, for which there clearly "is a market", seem to be hindered by those zealous activists. A minority, I'm certain, but vocal nonetheless.
It really can be small stuff too, like hiding that nested generic "GC adjacent" type salad behind a type alias so it's only visible if you need it. Yes, I can define that myself, but the point is that a lot of people need it often, given its widespread use, so why not provide one?
I'm sure there's reasons not to do the above example, but that's not the point. It feels like Rust is at 95% of being amazing, and that the remaining 5% is attainable if we want to.
I used to think that it was more likely that Rust would "expand upward" to provide the higher level syntax that people want in a language like I describe above, but it does seem like the language development has vastly slowed down in terms of big new features. I don't necessarily think this is a bad thing; plenty of people didn't like how much churn there was in the first several years after Rust 1.0 came out. I personally didn't mind since I never ran into any significant breakage in what I worked on, but I definitely noticed a change in how open companies seemed to be to use Rust in any capacity coinciding with Rust's releases growing smaller on average; I think "frequency of major language features" is often used as a proxy for "language maturity".
At this point, I think a new language is more likely to fill this niche than Rust, and I don't think that has to be a bad thing. Having Rust scoped to lower-level programming, where you're more worried about things like minimizing heap usage and avoiding copying strings, rather than trying to be all things to all users, might end up with a better experience in the long run.
I was a bit dubious about this point of view before reading the full post, but wow, the last couple paragraphs lay it on thick. Suing someone for using your open source product in their own product takes "courage"? Comparing the work of developing Wordpress to Rodney King? I want to give the author the benefit of the doubt, and maybe I'm too cynical, but this sounds even more corporate-y than a lot of stuff I've read on company-hosted blogs.
I don't think there was a comparison between WordPress and Rodney King. If so, what is the comparison being claimed? Is WordPress the cops? The one saying can't we get along?
The way I read it at least, it was a simple reference and sentiment, not a comparison.
There is no comparison; there is a reference. A reference is not a comparison.
You also didn't answer my question. You are claiming there is an inferred comparison. If so, please explain what two things are being compared, and in what way.
Does the reference have no interaction with the rest of the blog post?
If I mention that I have a big idea and then mention how a great man also once stated “I have a dream”, do you think I’m referencing MLK with no inferred comparison to my idea?
Referencing something in light of your previous statement out of nowhere for a public blog post is a comparison. If you think this just happened randomly and was completely divorced from the previous context then I struggle to understand how to communicate with you
He made an inferred comparison between his situation and Rodney King, that’s the comparison.
Whether the comparison is favorable or not doesn't matter to me. Comparing staying at a job or not over layoffs to a momentous civil rights event in our living history is not acceptable, in my opinion.
I guess not. In my mind, him and Rodney King both having lived through situations does not constitute a comparison. "Situation" is far too vague. There needs to be some feature that is inferred to be similar.
George Washington once was alive, and people are alive today too.
Take everything you said and then question why the author even referenced the situation.
If you agree that there needs to be a stronger connection to establish a comparison, then why even mention the situation? Do you think the author did it out of complete randomness?
They are making a point about conflict, human nature, and that people don't get along. They are using a quote about it and attributing it to the source.
>“Can’t we all get along?” We couldn’t then, and we aren’t, now.
That doesn't mean they think they are a victim like Rodney King. That doesn't mean they think someone is like the cops.
I think the point is that conflict exists as part of the human experience. You can acknowledge that it exists and move forward. It isn't useful to dwell on what could have been.
Again, what do you think they were using the quote to say? Are they claiming to be a victim like RK? You seem to not like it, but have no clue what you think the implication is, which strikes me as wild. "I don't know what you are saying, but I don't like your meaning."
By using a concrete example of a specific point in time, they are conflating that example with their own.
If he had just said something like “Can’t we all get along” without invoking the history, it would not have brought along the comparison.
To put this into the context of software, and assuming he did not intend the conflating, it would be like he invoked a function that updated state on a specific field that he wanted to change, but the function updated other fields as a side effect.
The most charitable interpretation is that it was an accident and he just reached for the first similar example he could think of, the least charitable is that he actually does think it’s the same.
My opinion is that the situations are so wildly different that it was an inappropriate example to invoke
I think the words popped into his head as a fact of life, and he correctly attributed it to the author.
I don't think any conflation occurred, because I wasn't confused about the two of them being the same. I don't think you were confused that they were the same either.
I don't think there are people out there that would read the paragraph and walk away thinking the author is just like Rodney King. Therefore I think taking offense for possible conflation on behalf of others is overly cynical.
Fair enough. It still feels shoehorned in to me though, almost like a school essay where students are told to include a quote in their conclusion (something that happened in my English class at least once from what I can remember), which just adds to the vibe of this being "homework" to support their employer rather than coming across as authentic.
I notice that on September 28 (near the top of the list, since it doesn't seem to have anything for today yet) the same Pitbull song was detected separately a little less than an hour apart, and I can't help but wonder if it was the same person listening to it on loop. Several months ago, my fiancee and I overheard someone driving outside blasting Adele's "Someone Like You" from inside our apartment, and every 45 minutes or so we'd hear it again, so we couldn't help but assume it was the same person driving around the city with it on loop, probably going through some rough breakup or something.
I wonder what the chance is of the birthday paradox affecting the music. Given that a song makes up x% of plays, how likely is it that some song has two consecutive plays, or two plays within an hour?
Good point! This definitely seems like a candidate for that sort of thing, and that's before even getting into the fact that the distribution of music likely to be played isn't at all uniform.
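As a rough back-of-envelope (assuming, unrealistically, independent plays at a constant rate, and with made-up numbers): if a song accounts for a fraction $x$ of plays and the stream logs $N$ plays per hour, its plays are roughly Poisson with rate $\lambda = Nx$, so

$$P(\text{at least two plays in a given hour}) = 1 - e^{-\lambda}(1 + \lambda).$$

With, say, $N = 15$ and $x = 2\%$, $\lambda = 0.3$, giving about a 3.7% chance in any given hour; over a full day that's roughly $1 - (1 - 0.037)^{24} \approx 60\%$ for that one song alone, so repeats within an hour should be fairly routine even without anyone looping.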
> I have never looked up and played Drake or Taylor Swift, but they come up in "curated" playlists thought-provokingly often.
That's not necessarily due to payola or whatever - both Drake and Swift are very talented as well as prolific and among the best operating these days, even if they are pop artists. It's not strange to see them recommended algorithmically if the listener is into modern music at all.
I almost exclusively listen to music from before the 90s, and Spotify has never once tried to play me anything from either of those artists, so that seems like a more likely explanation to me.
That's pretty unfair. Drake's writing abilities are questionable, but his ability as a performer is undeniable. Swift is well known for both writing and performing, and her popularity speaks to her skill and many years of effort.
Were they helped by having wealthy parents and breaking into the industry young? Certainly. Is that the whole story? Definitely not.
Btw, this is just your opinion because I honestly don’t think either of them are very good. It’s not even the genre of music I prefer to listen to.
Just wanted to point out that everyone has different tastes and your stating as fact that they’re both talented is as valid as me stating as a fact that they’re both untalented.
I don't normally listen to them or like their music either - but I can still recognize objectively that they have talent. It's not a matter of taste, just recognition of the obvious fact that being an extremely popular artist means that you're talented, no matter what pretentious haters say about it.
Think of _every_ popular actor or artist. Do you think every one of them is talented? If so, it sounds like you’re basing your understanding of talent on what other people think.
There's absolutely no way an untalented person would be so hugely popular over such a long period of time. I think people tend to dismiss the popular with elitist narratives about how the commoners can never understand real art far too often. It's simply arrogance and refusal to accept reality to deny them. Not every talented person succeeds, but every person that succeeds must have talent. Luck and backing aren't enough to make art massively popular.