
While they are nice, they don't block the event loop, and blocking is a feature you would need if you're aiming to replace `alert` and friends. As an alternative, yeah, it's a pretty reasonable API.
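
For illustration, a minimal sketch of the difference, assuming the `<dialog>` element is the kind of alternative under discussion (the comment doesn't name the API):

    // alert() blocks the event loop: the next line runs only once it is dismissed.
    alert("blocking");
    console.log("logged after the alert is closed");

    // dialog.showModal() does not block: it returns immediately, and the rest
    // of the page keeps running while the dialog is open.
    const dialog = document.createElement("dialog");
    dialog.textContent = "non-blocking";
    document.body.append(dialog);
    dialog.showModal();
    console.log("logged while the dialog is still open");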

    ma non eran da ciò le proprie penne:
    se non che la mia mente fu percossa
    da un fulgore in che sua voglia venne.
    A l'alta fantasia qui mancò possa
    ma già volgeva il mio disio e 'l velle
    sì come rota ch'igualmente è mossa,
    l'amor che move il sole e l'altre stelle. 
----

    And my own wings were far too weak for that.
    But then my mind was struck by light that flashed
    and, with this light, received what it had asked.
    Here force failed my high fantasy 
    but my desire and will were moved already —
    like a wheel revolving uniformly — by
    the Love that moves the sun and the other stars.

This is how you get FIPS 140 [1], which for those not in the know is a US Federal standard that mandates encryption which is _less_ secure than the current state of the art, and has been for decades. (Yes, there's a new version which was approved 5 years ago and which is still rolling out [2]).

[1]: https://en.wikipedia.org/wiki/FIPS_140

[2]: https://csrc.nist.gov/Projects/fips-140-3-transition-effort


At the same time, turning on FIPS mode is the way we discover that some of our modules were using MD5 in security-critical places. Because the government actually enforces FIPS, people (primarily Red Hat, I think) now actually put in the bare minimum of engineering effort so that when you set fips=1, the system will actually enforce the policy (unless you go out of your way to override it, or use a non-distribution-provided crypto stack).
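
For a concrete illustration (a hedged sketch, not from the original comment): on a Node.js binary built against FIPS-capable OpenSSL and started with `--enable-fips`, non-approved digests like MD5 fail at runtime instead of silently working, which is exactly how the latent usage surfaces.

    import { createHash } from "node:crypto";

    // Under FIPS mode, MD5 is not an approved digest, so createHash("md5")
    // throws instead of quietly producing a weak hash.
    try {
      const digest = createHash("md5").update("payload").digest("hex");
      console.log("FIPS is off; MD5 still works:", digest);
    } catch (err) {
      console.error("FIPS rejected MD5:", (err as Error).message);
    }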

Sure, now that the infrastructure for this has been built, it can be configured to require stronger crypto than FIPS does, but that infrastructure would never have been built without the likes of FIPS and the government mandating its use. And I know this because even with all of the hard engineering work of building that infrastructure already done, there are no commonly used stronger policies, because the only people who actually care are the ones forced to care by the likes of FIPS.

Our electrical standards might not be the safest way of wiring buildings, and not what we would come up with if we wrote the standards today. But they are orders of magnitude safer than what electricians would be doing without the standards.


What did you learn about (by using?) DDD that made you decide strongly against it?

DDD advocates for creating a shared lingo across everyone involved, including customers if they are highly embedded in the design process. This I agree with.

Ultimately DDD attempts to create real-world analogies in code; you know, dog inherits from pet inherits from mammal, etc. In my opinion, this approach to OOP easily ends up creating code that is difficult to reason about, probably because real-world things often have many responsibilities. Code becomes especially confusing when you have dozens of methods on domain objects that interact across several domains: the system-wide control flow becomes extremely complex. Now add outlier code/hacks, likely written to meet unrealistic deadlines, and things rapidly become completely incomprehensible.

And there's more that's hard to put into words. I code for the love of it, and I truly hated every moment working in DDD code. That was a completely novel experience for me: I'm fine with boring work (it has to happen), but DDD just hit very differently.


This sounds like DDD done wrong. Just because two concepts have the same name doesn't mean that they are the same thing. Drawing the boundaries of the bounded contexts is hard though, which is why shops often struggle with DDD.

For example, if I'm building a pharmacy system, a prescription means something to a patient, but also means something different (but similar) to a fulfillment team member. The prescription might have a prescriber, and it's important for the patient to know the name, address, and contact information of the prescriber. But for fulfillment purposes I don't care about the address or phone number, just the NPI, full name, and title for labeling purposes. This doesn't just extend to data, but to actions: a patient can't "ship" a prescription and fulfillment can't "renew" a prescription. In a DDD model these should be two separate objects.
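
A minimal TypeScript sketch of that separation (all names are illustrative, not from any real system):

    // Patient context: contact details matter, and a patient can renew.
    interface PatientPrescription {
      medication: string;
      prescriber: { fullName: string; address: string; phone: string };
      renew(): void;
    }

    // Fulfillment context: only the NPI, full name, and title are needed for
    // labeling, and fulfillment can ship but has no notion of renewing.
    interface FulfillmentPrescription {
      medication: string;
      prescriber: { npi: string; fullName: string; title: string };
      ship(): void;
    }

Because each context owns its own type, a change to what the patient sees can't silently break the label printer.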


I am very sorry for your loss.

> And I guarantee you would too

Please don't presume.

Having just buried my mother last year after caring for her as the cancer literally ate her bones away ... no, I did not and I still would not choose suicide over waiting for the end.


https://github.com/redis-rs/redis-rs/issues/1419#issuecommen... - looks to be mostly resolved at this point, with Redis Inc. simply stepping up its contributions to the open-source version without taking control.

Thank you antirez, mitsuhiko, and mortensi for working to resolve this amicably!


Arcan is experimenting with something like this (among others): https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...

See also:

* NuShell (https://www.nushell.sh/)


The Machine Stops by E. M. Forster is another very good one:

https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...

And re-skimming it just now I noticed the following eerie line:

> There was the button that produced literature.

Wild that this was written in 1909.


It's such an amazing short story. Every time I read it I'm blown away by how much it still seems perfectly applicable.


The key point:

> Seeing as the great majority of students spend over 80% of their digital device time using these tools to multitask, the automatic response for a great majority of students using these tools has become multitasking... Unfortunately, when we attempt to employ digital devices for learning purposes, this primary function quickly bleeds into student behavior.

> This is why, when using a computer for homework, students typically last fewer than 6 minutes before accessing social media, messaging friends, and engaging with other digital distractions. This is why, when using a laptop during class, students typically spend 38 minutes of every hour off-task. This is why, when getting paid as part of a research study to focus on a 20-minute computerized lesson, nearly 40% of students were unable to stop themselves from multitasking. It’s not that the students of today have abnormally weak constitutions; it’s that they have spent thousands of hours training themselves to use digital devices in a manner guaranteed to impair learning and performance. It’s also that many of the apps being run on those devices were carefully engineered to pull young people away from whatever they were doing.

> And perhaps this is the key point: I’m not saying that digital technologies can’t be used for learning; in fact, if these tools were only ever employed for learning purposes, then they may have proven some of the most important academic inventions ever. The argument I’m making is that digital technologies so often aren’t used for learning that giving students a laptop, tablet, or other multi-function device places a large (and unnecessary) obstacle between the student and the desired outcome. In order to effectively learn while using an unlocked, internet-connected multi-function digital device, students must expend a great deal of cognitive effort battling impulses that they’ve spent years honing - a battle they lose more often than not. (of course schools do often try to implement blockers and restrictions, but this opens up an eternal cat-and-mouse struggle, and the mice are very good at finding ways to evade the cat.)


Really jarring reading that. I think I was in middle school when AOL and the "internet" became a thing for me (lol), and sure, there was a lot of time-wasting stuff (chatrooms, games, etc.), but there was a huge, huge field of just exploration and learning. I cut my tech teeth on that: minimal parent supervision, no gamifying or artificial motivators, just my curiosity.

I feel for kids nowadays. It was the wild west back then, everything was basically unrestricted and nobody had any clue of the consequences, but we didn't have companies actively trying to addict us to stuff.

No idea what the answer is.


The interesting follow-up here... there is no reason these effects should be restricted to children. Like, if children can't learn with devices in a classroom, it suggests executives can't learn in an office (and might give a hint as to why we haven't seen expected productivity benefits driven by it).

But again, if the effect were this strong, I'd really expect to see broader evidence (even just at a national level, based on digital uptake).


These authors have big Google Docs of evidence, https://jonathanhaidt.com/reviews/. But if you read them, you will see the effect is (AFAICT) limited to certain populations. There is a significant fraction of students that do have trouble with executive function and staying on task and will fail to do their homework because of social media access. Then there are the other students that have no trouble staying off social media when they have to do homework.


Part of this is because the prefrontal cortex (associated with logic, willpower, discipline, focus, etc.) doesn't finish developing until about age 25.

Until then, folks may need to rely on the adults in their environment who are past that age, if they haven't yet built up those abilities themselves.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3621648/


> (and might give a hint as to why we haven't seen expected productivity benefits driven by it)

I'd be shocked if that's not a significant part of why. Most folks will get more work done when their only alternatives are trashcan basketball or doodling, versus... the Web.

I suspect another cause is that a great deal of application of computer technology in organizations aims to improve a certain kind of legibility of processes, which is something management loves a great deal, but the cost of attaining this legibility is high enough (including in hidden or hard-to-track ways) that any benefits are neutralized or all-accounted-for costs actually go up.

[EDIT] A third cause is probably that median ability to use computers remains very low among office workers. There continue to exist offices where knowing how to copy-paste(!) for more than just bare text, or extremely-basic spreadsheet use beyond "put numbers in it" makes you a wizard. I'm not kidding.


i'd love to have a distraction-proof workstation that would force me into my IDE or whatever design doc i'm working on and block out everything else.

but it's not possible, because the job requires all these gateway-drugs-to-distraction to be at the forefront of your workspace:

* keep slack open in case you're needed in that support thread

* keep a browser open so you can google the api docs for something (that's how i ended up here right now)

* keep spotify playing in the background so you can drown out the noise of the open office / work-from-home noise



The effect is absolutely that strong, even on adults. My anecdata in IT overwhelmingly supports that claim. Be it educational institutions, large enterprises, SMBs, or just Mom and Dad with their cell phones, the proliferation of distraction boxes has reduced critical and rational thinking abilities that are foundational elements of learning. After all, why try to reason out what you could just look up online? And if you can get the answer somewhere quicker, well, now you can also skim Twitter or Instagram with the time you saved.

During my brief stint working IT for private schools, with their SMARTBoards in every classroom, Meraki APs blanketing their 300-year-old campus structures, and Chromebooks in the hands of every student, the feedback I got was that students hated having technology always with them (to the point of breaking their Chromebooks on purpose), while teachers would deliberately not report broken technology (like their SMARTBoards) so they could force kids off of electronics and into a textbook or journal. Despite the often adversarial relationship of students and teachers, both cohorts acted unconsciously towards the same outcome of less technology.

This early experience has also informed my perspective on the role of technology in the workplace as a force amplifier rather than a mandatory toolset. It’s why I’m often fiercely resistant to any “new” technology coming in that doesn’t solve a problem we’ve already identified, as blindly expanding the IT estate just adds to the noise of the enterprise and detracts from the signals important to the business.

Even the younger folks (20-30) I find community with outside of tech spaces bemoan the over-reliance on technology in general. They aren’t luddites by any stretch of the imagination, and they love BlueSky and Instagram and TikTok and all the usual social spaces where their friends are, but they’ve engaged in more active resistance to technology as a necessary component in everything they buy. This same cohort is often an ally at work, because they seek to push products or solutions that remove technology interactions from the daily grind through automation, rather than dragging in the latest toys like we (millennials) did.


There has been consistent reporting for two decades that screens are leading to measurable reductions in attention span, and three decades of reports linking the internet and digital culture with mental health issues. What evidence is missing?


People with an Internet-connected screen appear to have a short attention span because they have instant access to a multitude of things that they're interested in, competing for their attention with whatever you want them to be focusing on.

What you want them to be focusing on is no longer the path of least boredom, like it was in the previous era; now the path of least boredom goes through their mobile device.

People's ability and willingness to concentrate on something they're interested in has not changed one iota. That sort of biological change takes hundreds of thousands of years of evolution.

Observations of the behavior of people interacting with tech can easily support the wrong argument that people's attention spans have increased. Just look at how somebody can play the same game for 11 hours straight, right?


> People with an Internet- connected screen appear to have a short attention span because they have instant access to a multitude of things that they're interested in, competing for their attention with whatever you want them to be focusing on.

This is incorrect. There have been repeated studies that show a distinct decline in individuals' ability to consume and process long-form text. Folks' brains are literally remodeling towards ADHD-like behaviors.


But that's more like a developmental situation in the individual having to do with their education.

I would expect, say, individuals who didn't go to school past grade two to show a declined ability to multiply 12 by 11.

People's handling of long-form text may be off from several decades ago, but it's still better than that of their illiterate ancestors 500 years ago.


It has literally nothing to do with education. It's the brain's own neuroplasticity responding to overstimulation:

https://longevity.stanford.edu/lifestyle/2024/05/30/what-exc...


I suspect it's not screens alone, but whether one is creating with screens or consuming with them. Different parts of the brain.


I'd be particularly keen on evidence affecting primary outcomes - e.g., are people genuinely more productive or healthier. Attention span is an interesting metric, but if it doesn't directly affect how much work you can do, or the quality of it, in a meaningful way, I am less fussed.


Given that reports on multitasking consistently show it degrades performance, this kinda seems like a slam dunk?


Mokie Coke! Mokie Coke!

----

https://scifi.stackexchange.com/questions/218596/70s-or-earl... for those not in the know.

