Can an insurance company deny a claim based on your DNA? They deny claims for pre-existing conditions that you hid from them, which would be the wrong thing to do on your part. They cannot deny a claim based on a pre-existing predisposition. Practically everyone is predisposed to getting cancer by merely being human; you might even have cancerous cells in your body right now that your body will destroy in a couple of minutes.
While Nintendo might not lose the trademark entirely if they don't sue, they could risk weakening its strength, therefore they have to sue in this case.
Consistent inaction against infringers can lead to the public perceiving the trademark as less distinctive. This can make it harder to protect the trademark in the future, and can encourage further infringement.
I'd venture a guess that it's not even worth Google's time to replace a ten-person team. It's probably just a KPI sent from the top: "Replace a few people to earn your bonus this year." Constructing useless KPIs when you cannot come up with interesting ones.
And specifically, Brassica oleracea shows up against lead, though no amount is given in that table. The referenced webpage [1] mentions 0.1% to 3% (!) of dry weight, though that's in relation to hyperaccumulators generally.
This article [2] gives an indication of what high concentrations may look like in Brassica oleracea, though I still don't have much idea of what those numbers mean to a human being eating the plant. Considerably less than the 10,000+ mg/kg seen in other plants, thankfully.
Humans have a great capacity to learn from our mistakes. Our source code, our DNA, has no encoding related to running a business in a certain way. We mourn old Google, the revolutionary place, the likes of which could not have existed 100 years ago. But we forget that it was such a revolutionary place that its mere existence was an anomaly of sorts, that it spurred us to create several such new places, and that that learning will help us create many more.
> US taxpayers should be directly paying for these studies to prove and disprove medicines
I think you are right that people should pay for it. I could pay for it, but who would run the study? Are they going to be efficient? Or is it going to be a waste of money? Could the EU pick this up? They are trying to be a bastion of science.
Is it efficient that taxpayers pay, via increased Medicare and health-insurance costs, for the marketing/lobbying and the profits a private company needs in order to gamble that much on developing a medicine that may not even be that effective?
Drug companies have the second or third highest profits and profit margins after tech companies. Sounds like a lot of “efficiency” could be gained by taxpayers taking the risk of R&D themselves.
The EU is not close to being a global bastion of science by any measure relative to the US or China, afaik. The EU doesn't come close in terms of funding, papers published, or any other measure I can think of.
Exactly this... my time and money are more important than the cost to the planet. I earn enough not to care. Recycling is a scam. I do not even know what reduce and reuse mean! My car has a bumper sticker that says "SAVE THE PLANET", with the subtext "without impacting my lifestyle". /s
Spent about 2 hours going to three stores and trying 7-8 pairs, standing in front of trial rooms, talking to cashiers about sizes in the system... before I got the right size and fit.
Could have easily saved BOTH money AND time if only I knew how to patch them. They only tore near the crotch (I am suspecting planned obsolescence).
See also advertising. C++ and Java had enormous advertising budgets, while Common Lisp had virtually none. For years, virtually every programming book and magazine was touting C++ and then later Java. Every conference, every keynote, everything a CTO might ever read or notice was telling them to use C++ or Java.
C++ itself never had a marketing budget! The nearest you might find is marketing for implementations back when people paid for programming languages, but the only surviving one of those is really Visual Studio.
Lisp has had decades to break out of its niche if it delivered a really advantageous solution, but somehow that never happened.
> Lisp has had decades to break out of its niche if it delivered a really advantageous solution, but somehow that never happened.
I think a huge part of it is that it is not immediately obvious that one needs what Lisp offers, and by the time the system has grown to the extent that the need is obvious, it has also grown to the extent that one no longer sees the forest for the trees. One doesn’t think ‘oh man, I need garbage collection’; one thinks, ‘oh man, I need to manage malloc and free better!’. One doesn’t think, ‘oh man, dynamic scope would really fit this problem well’; one thinks ‘oh man, I need dependency injection.’ Peter Norvig famously noted that 16 of the original 23 design patterns were invisible or simpler in dynamic languages such as Lisp†. Heck, there was a time when one couldn’t rely on recursion, or even conditionals! But the programmer who has managed to get stuff done without recursion, or without conditionals, or without macros doesn’t really see the point. He’s even worried: those things may add too much expressivity to the language. Why, folks could write unmaintainable code with them!
Of course, folks write unmaintainable code without them, too …
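Norvig's point about patterns dissolving can be sketched quickly. In a language with first-class functions (Python here, standing in for Lisp; the pricing example is made up), the Strategy "pattern" collapses into simply passing a function:

```python
# Strategy "pattern" without pattern machinery: the strategy is just
# a first-class function passed as an argument.

def total_price(items, discount):
    """Sum item prices after applying a discount strategy.

    `discount` is any callable mapping a price to a price.
    """
    return sum(discount(p) for p in items)

def no_discount(p):
    return p

def half_off(p):
    return p / 2

prices = [10.0, 20.0, 30.0]
print(total_price(prices, no_discount))  # 60.0
print(total_price(prices, half_off))     # 30.0
```

No interface, no context class, no concrete-strategy hierarchy; the "pattern" is just the language's ordinary calling convention.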
Anyway, I think a huge issue is one of education and experience. Ours is a massively growing field. The vast majority of folks are juniors, and don’t know any better; a portion of their education was miseducation. The seniors often have one year of experience, twenty times (rather than twenty years of experience). Objective standards are rare to nonexistent. Norms and standards are absent.
But yeah, when I’m working on a large project in a language other than Lisp, I often think, ‘man, this would be so much easier in Lisp!’ or even ‘man, this would be practical in Lisp!’ (because anything is possible in a Turing-complete language …).
Yes, and whose advertisements do you think show up in every single one of those magazines? Which implementations get mentioned by every single C++ book? Which organization sponsored every single C++ conference? Don’t forget that they had stiff competition from the advertising budgets of other large companies, such as Oracle and IBM.
Also, don’t forget that Lisp machines were once the most coveted development machines on the market. But Symbolics had to develop not only the language and IDE, but also the OS, the hardware, the microcode, and everything else all at once. It’s pretty telling that they soon began running Unix (on a separate processor) and then their next product was an add-in card for an Apple Macintosh II containing a Lisp processor ASIC. By then the C++ hype train was gathering steam and the AI winter had begun. Symbolics didn’t survive, and their direct competitor LMI had even less chance. So it’s not that Lisp offers no advantages, it’s just that market conditions killed off the companies that were offering it. Note that these market conditions were created by advertising and shifting public perception.
I thus return to my thesis, which is that the market success of a language has little, if anything, to do with the advantages of the language. Instead marketing and advertising rule the day.
> But Symbolics had to develop not only the language and IDE, but also the OS, the hardware, the microcode, and everything else all at once
This was forty years ago. Doing LISP advocacy like this just makes people sound like they're that Japanese guy who refused to surrender until the 1970s. The world has moved on; there have been other opportunities; and LISP has not won them either.
The timeframe doesn’t matter. What matters is that C++ triumphed not because it was a better language, but because it was sold better. It had better advertising.
I was going to disagree with you, until I re-read and noticed the word "intrinsic".
I would say that language popularity is highly correlated with the actual usefulness of the language. But "actual usefulness" covers far more than the "intrinsic qualities" of the language. It also covers the scope and quality of the standard libraries, the third-party libraries, the available tools like compilers, IDEs, and debuggers (which may be third-party), available documentation and training, and people available to hire who know the language. Of those items I listed, the only parts that could be considered "intrinsic" are the standard libraries and the tooling that comes with the language by default.
Eventually you need to work with other people, and using a common time-shared or multi user session is unlikely. Now consider that lisp images generally can't be easily diff'd or merged.
And with that the edit-and-continue paradigm loses much of its value. If you have to commit changes to a shared source file anyhow then you'll be not much worse off with debugging a core dump.
People say this a lot, but they fail to take into account that you can debug your server live as it continues to handle normal traffic. Even if you don’t deploy changes via the REPL, merely debugging the problem in a REPL without restarting anything is a huge win.
I don’t think that they do. I know that Erlang has something similar; you can reload a module and it will gradually replace the old code as processes are replaced. In principle you could debug a single thread in a C (or C++) program without stopping the others, and some IDEs will let you edit the code and recompile while the program is running (they patch out the old function definition so that it jumps to the new one instead), but good luck doing that in production.
But don’t forget that in Common Lisp, you can redefine classes at run time as well as functions. All existing instances of the class will be updated to the new definition, and you can provide the code that decides how the old fields are translated into the new ones if the default behavior is insufficient. Good luck doing that in C or C++.
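Python has no built-in analogue of CLOS's instance-updating protocol, but a rough hand-rolled approximation (class names and the `upgrade` helper are illustrative, not any real API) gives the flavor of what Common Lisp does for you automatically:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(3, 4)

# "Redefine" the class: the new version carries a derived field.
class PointV2:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.norm_sq = x * x + y * y

def upgrade(instance):
    # Hand-rolled analogue of CLOS updating live instances on class
    # redefinition: migrate an old instance to the new definition and
    # decide how old fields map to new ones.
    instance.__class__ = PointV2
    instance.norm_sq = instance.x ** 2 + instance.y ** 2

upgrade(p)
print(isinstance(p, PointV2), p.norm_sq)  # True 25
```

The key difference: in Common Lisp every existing instance is updated for you when the class is redefined, while here each live object must be migrated explicitly.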
My favorite story involved a race condition that was discovered in the code running on a satellite, after it had been launched. The software on the satellite was mostly written in Common Lisp (there was a C component as well), so they opened a connection to the satellite, started the REPL, debugged the problem, and uploaded replacement code (which obviously added a lock or something) to the satellite all through that same REPL. While the satellite was a hundred million miles away from Earth, and while it kept performing its other duties. You can’t do that on a system which merely dumps core any time something unexpected happens.
> During that time we were able to debug and fix a race condition that had not shown up during ground testing. (Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.)
examples please, because so far i have only seen this from common lisp and smalltalk. there is also pike where i can reload classes or objects at runtime, thus avoiding a full restart, but it's not as closely integrated as in smalltalk and you actually have to build your app in a way that allows you to do that.
It's not always usable, but Visual Studio offers this for C# (works most of the time) or C++ (works in fewer cases because of the terrible header model)
how do you do it? in the browsers debugger? maybe, but that is not integrated with your actual source files, so you have to be careful to track your changes and copy them to your source. that may help in some cases but isn't really practical.
But that’s not the same thing at all. If you’re debugging an exception in Java, you cannot continue execution as if the exception had not been thrown at all. With Common Lisp’s condition system you can.
The question was whether you can debug a live service while it's handling live traffic. Not whether you can fix it. Java can definitely do the former, and definitely can't do the latter.
Your question moved the goalposts. Making change to a running system wasn't part of db48x's claim to which dleslie responded. It was explicitly excluded, in fact.
ok, fair, but what i am asking about is a feature of common lisp (and smalltalk or pike), so i didn't pay attention to the exclusion. that was not deliberate. my bad. (maybe you could say i moved the goalpost back to the original topic)
Umm.. you can throw an exception, you can return to previous call frame, you can reload modified classes. If you want unlimited code modification, you can use dcevm https://github.com/dcevm/dcevm
The classic lisp way is to build a runtime image by editing the image while running it, then dumping a binary. You never specifically need to load a source file.
But you can't easily collaborate with that style of development.
With lisp you typically develop in source files, versioned with Git. The same as any other language. Source files and live development are not mutually exclusive. SLIME can send code snippets from your file over to the REPL for live development. You have your cake and eat it too.
The REPL (or scratch buffer) is typically used for testing/observing. Not the actual source code development. Although it is possible to never write your source code to a file if you're just playing around with a toy experiment.
> The classic lisp way is to build a runtime image by editing the image while running it, then dumping a binary. You never specifically need to load a source file.
I'm not a huge fan of lisp, but I do like the language. Unfortunately, it's way too hard for 90% of developers. They need more structure so they can think about one thing at a time, which is why C-like syntax won in the end.
All the best developers I know are into Lisp or Haskell (or both). They can crank out ridiculous code which then goes unused because maintenance would be too much of a burden. Sometimes I write some really complex one-liners (which are like 5-10 lines long) to do some tasks using all the possible hacks to avoid having to type an extra character. I might be able to do that, but most developers wouldn't be able to see how the data get transformed and keep all of that in their mind. Whatever I wrote is unmaintainable by most people.
The reality is that the majority of people can't grasp their mind around complex concepts. Which is ok, most developers write a few API endpoints and some UI components, they don't need much to create value.
We can get some good concepts from the functional world and transfer them to C-like syntax languages though. We can even have some of the programmability of Lisp (but not all of it) via macros.
It probably didn't help that a bunch of key Lisp people were leaning hard into proprietary $80,000 minicomputers right around the time that commodity(ish) microcomputers were about to massively explode in popularity.
Lisp is more of a meta-language than a mere language. Since it's homoiconic, you eventually end up developing a domain-specific language that works great for your subject area. It also may make it a bit harder to onboard new team members, because the level of abstraction which you can reach can be really high, all while keeping performance reasonable.
Technically, you could run e.g. a Python program under pdb, break on certain exceptions, and fix things inside a living system. It's just not a customary way to do that.
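A minimal sketch of that Python workflow (the function is made up; the actual `pdb.post_mortem` call is commented out because it is interactive, but the "fix the live binding and carry on" step is real):

```python
import pdb  # stdlib post-mortem debugger
import sys

def buggy(xs):
    return sum(xs) / len(xs)  # raises ZeroDivisionError on []

try:
    buggy([])
except ZeroDivisionError:
    # In a real session you would drop into the debugger here to
    # inspect frames and poke at the live state:
    #   pdb.post_mortem(sys.exc_info()[2])
    # Then rebind the function to a fixed version without restarting:
    buggy = lambda xs: sum(xs) / len(xs) if xs else 0.0

print(buggy([]))      # 0.0
print(buggy([2, 4]))  # 3.0
```

The crucial limitation versus Lisp's condition system: by the time you are in `post_mortem`, the original call has already unwound, so you cannot resume it as if nothing happened; you can only fix things for future calls.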
Only five years ago, CL's web presence was not attractive. This included "official" websites and online documentation (despite all the great books). It's better now (common-lisp.net was reshaped, there's lisp-lang.org, a better Cookbook, the CL Community Spec, more YT tutorials…)
there is no full-featured web framework (although you can write web apps of course),
no fully satisfactory GUI lib (though there are now Gtk4, Qt5 (hard to install), IUP, nice-looking Tk themes, more low-level bindings to graphics libraries, etc.)
It's questionable whether it's really much better than just a debugger with a core dump (for what I usually work on, it's not any better). It is, however, a pretty snazzy feature.
with a debugger, after you fix the application, you still have to restart and run it again. the big benefit here is that no restart is required.
smalltalk can do the same btw. i had been working on a small website where a specific request from the browser would fail. instead of sending a failure message the request would just hang. in the mean time on the server in my pharo smalltalk window an error would pop up. when i fixed the error, the download of the request resumed in the browser as if nothing had happened other than a delay.
> with a debugger, after you fix the application, you still have to restart and run it again. the big benefit here is that no restart is required.
We like to make sure everything running in prod is verifiably built from source in-repo. So that's the thing, while it's a really snazzy feature for sure, the value over the rest of the world is on the questionable side. At least for our use case, but I think it's true for most use cases.
edit: Also really curious about your smalltalk and pharo experience. Sounds fascinating!
once you fixed the problem you of course commit the change to the code on disk. there is nothing in the workflow that prevents you from doing that. you are not going to just fix apps in production without running your tests and what not. at worst you fix an error in a production system, and run the tests afterwards to make sure everything is clean. but mostly this feature is used during development when your code is still incomplete. not having to restart every time there is an error simply speeds up your development loop.
i am really just a beginner with smalltalk and CL. as a vim user i didn't really have a good integration of the CL repl with the editor (there were tools, but they weren't as straightforward to set up as slime would have been). and when i encountered the breakloop i didn't really know what to do and just tried to get out of it as quickly as i could. (exiting vim is easier ;-)
the thing that bothered me was that when i change code in the repl without an integrated editor, then how do i keep track of the changes and make sure i don't lose them? but then, i just never tried to set up a proper environment.
in smalltalk on the other hand you get a nice IDE with all the comforts of a GUI. you have your windows where you browse your code neatly structured in classes and methods. there is a window where you run your app and manage your tests which light up red or green if they fail or pass, another which logs error or other print messages, and if an error happens while an app is running a new window pops up, showing you a trace of what was running and a text field with the code that failed, like in a debugger, and right there you can edit the code and resume.
the code is written to your class, and when you go back to your code browser the change is reflected there, and you can commit it to a version control system. pharo btw has pretty good integration with git, and already a few years ago it almost acted like a git gui. it's probably even better now. the primary downside is that the text editor in pharo is simple, like a browser text area, and not a sophisticated editor like emacs or vim.
> [Lisp] the thing that bothered me was that when i change code in the repl without an integrated editor, then how do i keep track of the changes and make sure i don't lose them
> [Smalltalk] the code is written to your class, and when you go back to your code browser the change is reflected there
I feel your pain. "writing the changes back to the source code definition" seemed like a no-brainer desirable feature of a Lisp REPL, yet I could not find a way to do that out of the box using Slime. I'm sure one could program it, however! Bet someone has...
> ... yet I could not find a way to do that out of the box using Slime.
Here's what I use: edit code, save the file, tell slime to eval the current defun. I haven't yet been undisciplined enough to need to hook `slime-eval-defun` to call `save-buffer` first. Would that work for you?
In my experience, it's definitely better for prototyping because if you hit an error that is difficult to reproduce, you can update your code and try again, without having to try and create reliable steps to reproduce the problem.
Yeah I can see this being a pretty handy feature for prototyping. Otherwise you'll need to, like, catch errors in your main loop to ensure you don't have some program-halting issue while you're working.
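Without edit-and-continue, the usual prototyping fallback looks something like this sketch (Python; `step` and `main_loop` are hypothetical names for illustration): catch everything in the driving loop so one bad iteration doesn't halt the whole session.

```python
import traceback

def step(state):
    # Prototype logic under development; imagine it sometimes raises
    # while you are still iterating on it.
    if state == 1:
        raise ValueError("still buggy at this input")
    return state + 1

def main_loop(iterations):
    state = 0
    for _ in range(iterations):
        try:
            state = step(state)
        except Exception:
            # Best we can do here: log the error and keep the loop
            # alive. The failing call itself is lost.
            traceback.print_exc()
    return state
```

Contrast with the Lisp/Smalltalk breakloop, where the failing call is suspended rather than discarded, so after the fix it can resume and complete.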
It’s the issue. The ever popular syntax issue continues to haunt it.
CL has had no real “unknown unknowns” for a very long time. While folks who newly discover it feel they found the gold idol in the jungle cave, the cave is, in truth, well explored, mapped, and documented but the idol is left behind.
All excuses to not use CL have long been, or have had the opportunity to be, addressed. Today, it’s fast enough, small enough, empowered through utilities and libraries enough, has different build and deployment scenarios to work with a vast array of applications. And yet here we are...still.
ABCL runs on the JVM, which runs everywhere on everything. Clojure is a first-class system on top of the JVM, but has no real adoption. Some, to be sure, likely (I have no data) more than CL itself. But it’s still a blip on the radar.
Meanwhile, a bunch of hackers threw together a language sharing many aspects of the core feature set made popular in Lisp and Scheme runtime environments, made it look like an Algol stepchild with curly braces and everything, and since then an entire ecosystem of software has been written (and rewritten) into this system, and its runtime is the focus of some of the largest companies in the history of civilization.
Raise your hand if you think that if the creators of JavaScript went with an S-expression syntax instead of a C/Java derivative, we’d be running a VBA clone in our browsers (but nowhere else)?
Because at this juncture, THE thing that distinguishes CL and other Lisps from where we are today, is the syntax. Every other charm these systems enjoyed have been cherry picked away.
Advocates say the syntax is not an issue. It’s a feature, not a bug. But the “wisdom of the crowds” has spoken, and they stay away.
It's complicated, but there are likely some main contributing points.
1. Computing is extremely blinkered. More than in other professions, people in computing are unaware of what has been done before. If they learn a little bit about what has come before, they look for reasons to be dismissive of it, so they can turn their attention away. They know a few things (languages, platforms, tools) and just live in that world.
2. At any given time, a small handful of things are popular. This changes over time. The amount of stuff we have produced in computing vastly outnumbers what is popular. It's like a game of musical chairs in a packed sports dome, where there are seven or eight chairs on the floor.
3. The field is still growing; there are probably more people who joined in the last 10-15 years, than those who joined the field before that. Almost every newcomer plunges into whatever is popular at the time, and will never look at anything else unless it is new and popular, which will happen at most some 3-4 times in their career before they are out.
Those are generalities. Then there are Lisp specific historic items.
Lisp specifically had a bit of a heyday in the 1970s and into the 80s. People developing Lisp systems were very ambitious and their work eventually demanded hardware that only big companies and well-funded institutions could afford. They did fantastic things, but Lisp did not scale down to the emerging single-chip microcomputer with a small memory (or not in that fantastic form). Typically, Lisp would have liked a few megabytes of RAM compared to tens or hundreds of kilobytes.
The microcomputer was something new and popular, bringing with it new people who had nothing but microcomputer experience. Most of them didn't know anything about Lisp other than reading about it in books or some magazines like Byte and Creative Computing, which is something that only a curious minority would engage in.
Eventually, consumer microcomputers became powerful enough to run Lisp well, but by that time, the people who remembered Lisp were vastly outnumbered by new people.
(Speaking of memory sizes, even GNU's implementation of Lisp, GNU Emacs, was absolutely derided for its memory use, well into the 1990s. For instance, one joke interprets its name as an acronym for Eight Megabytes And Constantly Swapping (EMACS). Eight! Not eight hundred or eighty. Eight megabytes is ridiculous today; the resident size of a Bash process can easily be that.)
Another problem with Lisp is academia, which has played a role in actively destroying interest in Lisp.
After the downturn in Lisp popularity, schools continued to teach Lisp dialects, but often badly, leaving students with a bad taste. They used scaled down dialects not suitable for software development work, like certain Scheme implementations, and gave students assignments that focused on doing things with recursion and lists, and other nonsense that is far removed from making a text editor or game or whatever.
This practice is still continuing. If you follow the [Lisp] tag on StackOverflow, you will notice that from time to time, students post Lisp homework questions. E.g. "we are required to write a recursive function that removes matching items from a list". The comments will say: there is a built-in function remove, why don't you use that? The student will reply: oh, is that right? But, in any case, we are only allowed to use these five functions: cons, car, atom, ... and we can't use loops, only recursion.
Never in a programming course that used C, or Modula-2 or whatever have I had homework forbidding me to use any language statement type, operator, or library function! This is purely a Lisp teaching problem, and it leaves students with wrong ideas and impressions. They might misremember things and spread misinformation like "Lisp has only linked lists and nothing more, and everything must be done with recursion; it is useless".
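The homework exercise described above, sketched here in Python for contrast (the constrained classroom version next to what any practitioner would actually write):

```python
def remove_items(item, lst):
    # The homework version: recursion only, no loops, no built-ins.
    if not lst:
        return []
    if lst[0] == item:
        return remove_items(item, lst[1:])
    return [lst[0]] + remove_items(item, lst[1:])

print(remove_items(2, [1, 2, 3, 2]))        # [1, 3]

# Versus the one-liner a practitioner would reach for:
print([x for x in [1, 2, 3, 2] if x != 2])  # [1, 3]
```

The exercise teaches recursion, but a student who only ever sees the first form can easily conclude that this is how all Lisp code must be written.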
I used to be a hardcore Opera fan, and this reminds me of something they did back in the day when Jon von Tetzchner was still there. They released the "Opera Bork" edition https://press.opera.com/2003/02/14/opera-releases-bork-editi... because MSN was sending a broken website specifically to Opera, so they spoofed the user agent.
I came to the comments precisely to post this exact link so delighted to see someone beat me to it.
Opera were always so good at giving a window into the world of compatibility testing & challenges - @hallvord in particular had a great blog on my.opera about their work subverting large websites that would block or thwart Opera & other non-IE browsers.