Hacker News | terlisimo's comments

well...

1) GoL is Turing-complete

2) there are algorithms that calculate digits of Pi or e.

so... yes?

but if I just took any old Pi-digits algorithm and encoded it in GoL, its appearance would not be meaningful or "elegant" to our senses. You're probably asking "what does the shortest/most elegant program to calculate Pi in GoL look like, and does it maybe have some unexpected relation to other mathematical objects like, I dunno, Euler's identity or... the Mandelbrot set?" And then you would probably need to answer the question "Well, how would you like the digits encoded and represented?".

All of a sudden your question becomes a bit ambiguous. Or did I misunderstand what you meant?

I mean.... I think I feel what you're asking, like... is there some primal version of Pi that can be encoded in a GoL initial condition with as few bits as possible? But I'm afraid that the answer is something like "well, that depends on what you mean by [...]"
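To make "any old Pi-digits algorithm" concrete, here is one of the simplest candidates, Gibbons' unbounded spigot algorithm, sketched in Python (a plain program, nothing GoL-specific about it):

```python
def pi_digits(n):
    """Return the first n decimal digits of pi using Gibbons'
    unbounded spigot algorithm (no fixed precision needed)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            # The next digit is settled: emit it and rescale.
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Otherwise consume another term of the underlying series.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x,
                                k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Compiling even this handful of integer operations down to GoL gliders and logic gates would produce an enormous pattern, which is rather the point: the result would be correct but not "elegant" to look at.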


Adam P. Goucher's pi-calculator pattern is quite elegant, I'd say. It prints the decimal digits in-universe, with Conway's Life blocks for pixels.

https://conwaylife.com/wiki/Pi_calculator


Yes, you're right, the question was ambiguous. What I was really looking for is whether there is a way to physically derive Pi out of some basic cellular automaton operations. That does not necessarily have to involve circles etc., much less representing the digits of Pi itself; just a pattern that evolves into ever more accurate values of Pi (or a quantity derived from Pi). This definitely must not be encoded or somehow preprogrammed into GoL's initial condition. In fact I don't even care if the evolution rules of standard GoL are followed, just that SOME set of rules automatically produces a Pi-derivative.

The reason it would be cool is that we'd have taken a most fundamental geometric constant and derived it from purely graph-update mechanics. If you could do that, then you could likely do a whole set of other things, like physical laws.


One step closer to Managed Democracy.

It's an idea from a video game where AI is spying on your life 24/7 and infers who you would vote for so you never actually need to (or can) vote.


From a cynical point of view, the point of democracy is to get the people to blame each other - for voting for the wrong party, for not voting at all, for voting for one of the two big parties or for voting for the spoiler. And thus prevent revolution or murder of politicians.

Inferring who you would vote for doesn't accomplish this as well.


Reminds me of this short story by Asimov where nobody votes anymore; a computer just predicts the outcome. https://en.m.wikipedia.org/wiki/Franchise_(short_story)


What if the AI looked at all the data and voted for the representatives who'd be best to better their "master"'s life?

Although then, Zuck's, Bezos' and Musk's AI would still send out lobbyists to manipulate politicians to fuck us plebs over (Hah, the overpaid codemonkey thinks he's a pleb...).

It'll be a future where an AI orders escorts sent to Supreme Court justices' rooms on golf vacations paid for by the billionaire. "Data says Justice [____] likes long-haired blondes with nice buttocks, send Tiffany. She should bring beer, he likes beer."


why not solve it with bash then, just put

#!/path/to/your/venv/bin/python

as the first line of your script, done/done


That is obviously not what I meant by "solving it with bash" and well you know it.

First, one often needs to set PYTHONPATH etc., and this is best done near the point of execution, in a wrapper script, not wrangled into ~/.bash_profile where it gets forgotten and is not project-specific.

Secondly, and more importantly, your suggestion assumes the venv lives in a fixed location. This is unlikely to be the case.[1] What is preferable is something which is independent of filesystem location. The bin/run-python script is able to find its location on the filesystem, and the location of the venv relative to it.

[1] You might have a custom python distribution with a bunch of modules installed into a well-known location and therefore using that for the python in your application is a reasonable solution, but that is not what we are talking about here.
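A minimal sketch of such a bin/run-python wrapper, assuming a layout where the venv lives at ../.venv and the sources at ../src relative to the script (both paths are assumptions; adjust to your project):

```shell
#!/bin/sh
# Hypothetical bin/run-python: locates itself on the filesystem,
# then finds the venv relative to that location, so the project
# works wherever it happens to be checked out.
here="$(cd "$(dirname "$0")" && pwd)"
venv="$here/../.venv"

# Project-specific environment, set at the point of execution
# instead of in ~/.bash_profile.
PYTHONPATH="$here/../src${PYTHONPATH:+:$PYTHONPATH}"
export PYTHONPATH

# Hand off to the venv's interpreter if it exists.
if [ -x "$venv/bin/python" ]; then
    exec "$venv/bin/python" "$@"
fi
echo "run-python: no venv at $venv" >&2
```

The key trick is the `here=` line: the script derives its own absolute directory from `$0`, so everything else can be expressed relative to it.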


> ZFS is notorious for corrupting itself when bit flips

That is a notorious myth.

https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-y...


Nvidia drivers are not open source.

Nvidia has open-sourced the kernel-module "driver", which has recently been declared stable enough for general consumption.

But the kernel-mode driver is only a small part of what you would call the graphics card driver, and the bulk of it is still very much a proprietary blob.

The true open source driver for NV cards is Nouveau[1], which works OK with older cards but is slow to support the newest cards and features. Performance, power management and hardware acceleration are usually worse than with the official drivers, or not working at all.

[1] https://nouveau.freedesktop.org/FeatureMatrix.html


Thanks. All of that is correct.


I see. Thanks for explaining.


A personal anecdote:

One of my guys made a mistake while deploying some config changes to Production and caused a short outage for a Client.

There's a post-incident meeting and the client asks "what are we going to do to prevent this from happening in the future?" - probably wanting to tick some meeting boxes.

My response: "Nothing. We're not going to do anything."

The entire room (incl. my side) looks at me. What do I mean, "Nothing?!?".

I said something like "Look, people make mistakes. This is the first time this kind of mistake has happened. I could tell people to double-check everything, but then everything will be done twice as slowly. Inventing new policies based on a one-off like this feels like an overreaction to me. For now I'd prefer to close this one as human error - wontfix. If we see a pattern of mistakes being made, then we can talk about taking steps to prevent them."

In the end they conceded that yeah, the outage wasn't so bad and what I said made sense. Felt a bit proud for pushing back :)


[preface that this response is obviously operating on very limited context]

"Wanting to tick some meeting boxes" feels a bit ungenerous. Ideally, a production outage shouldn't be a single mistake away, and it seems reasonable to suggest adding additional safeguards to prevent that from happening again[1]. Generally, I don't think you need to wait until after multiple incidents to identify and address potential classes of problems.

While it is good and admirable to stand up for your team, I think that creating a safety net that allows your team to make mistakes is just as important.

[1] https://en.wikipedia.org/wiki/Swiss_cheese_model


I agree.

I didn't want to add a wall of text for context :) And that was the only time I've said something like that to a client. I was not being confrontational, just telling them how it is.

I suppose my point was that there's a cost associated with increasing reliability, sometimes it's just not worth paying it. And that people will usually appreciate candor rather than vague promises or hand-wavy explanations.


I had a similar situation, but in my case it was due to an upstream outage in an AWS region.

The final assessment in the incident review was that we should have a multi-cloud strategy. Luckily we had a very reasonable CTO who prevented the team from doing that.

He said something along the lines that he would not spend three quarters of a million plus 40% of our engineering time to cover something that rarely happens.


We've started noting these wont-fixes down as risks, and started talking about their probability and impact. That has resulted in good, realistic discussions with people from other departments or higher up.

Like, sure, people with access to the servers can run "ansible all -m command -a 'shutdown now' -b" and worse. And we've had people nuke production servers, so there is some impact involved in our work style -- though redundancy, and gradually ramping people up from non-critical systems to more critical systems, mitigates this a lot.

But some people got a bit concerned about the potential impact.

However, if you realistically look at the number of changes people push into the infrastructure on a daily basis, the chance of this occurring seems to be very low -- and errors mostly happen due to pressure and stress. And our team is already over capacity, so adding more controls would slow all of our internal customers down a lot too.

So now it is just a documented and accepted risk that we're able to burn production to the ground in one or two shell commands.


I hear ya, that sounds familiar.

The amount of deliberate damage anyone on my team can do is pretty much catastrophic. But we accept this as risk. It is appropriate for the environment. If we were running a bank, it would be inappropriate, but we're not running a bank.

I pushed back on risk management one time when The New Guy rebuilt our CI system. It was great, all bells and whistles and tests, except now deploying a change took 5 minutes. Same for rolling back a change. I said "Dude, this used to take 20 seconds. If I made a mistake I would know, and fix it in 20 seconds. Now we have all these tests which still allow me to cause total outage, but now it takes 10 minutes to fix it." He did make it faster in the end :)


Good, but I would have preferred a comment about 'process gates' somewhere in there [0]. I.e. rather than say "it's probably nothing let's not do anything" only to avoid the extreme "let's double check everything from now on for all eternity", I would have preferred a "Let's add this temporary process to check if something is actually wrong, but make sure it has a clear review time and a clear path to being removed, so that the double-checking doesn't become eternal without obvious benefit".

[0] https://news.ycombinator.com/item?id=33229338


Nothing more permanent than a temporary process.

When you have zero incidents using the temporary process people will automatically start to assume it’s due to the temporary process, and nobody will want to take responsibility for taking it out.


I agree with the implication, but don't think this applies here. The scenario here is a safety net, i.e. something that visibly "catches" errors, at a cost. If you have zero incidents "caught" during the evaluation period, then the evaluation result is that the cost isn't worth paying.

Obviously if you're planning to implement a vague deterrent-style solution which you have no means (or intent) of evaluating just to check a box, you're better off not doing it.


The infamous lion-repelling rock in action.


Yep yep, exactly this. Sometimes an incident review reveals a fluke that flew past all the reasonable safeguards, a case the team may even have acknowledged when implementing those safeguards. Sometimes those safeguards are still adequate: you can't mitigate 100% of accidents, and it's not worth it to try!

I'd go further and say that it's a trap to try. It's obvious that you can't get 100% reliability, but people still feel uneasy doing nothing.


> If we see a pattern of mistakes being made then we can talk about taking steps to prevent them.

...but that's not really nothing? You're acknowledging the error, and saying the action is going to be watch for a repeat, and if there is one in a short-ish amount of time, then you'll move to mitigation. From a human standpoint alone, I know if I was the client in the situation, I'd be a lot happier hearing someone say this instead of a blanket 'nothing'.

Don't get me wrong; I agree with your assessment. But don't sell non-technical actions short!


> You're acknowledging the error,

Which is important but not taking an action.

> and saying the action is going to be watch for a repeat

That watching was already happening. Keeping the status quo of watching is below the level of meaningful action here.

> if there is one in a short-ish amount of time, then you'll move to mitigation.

And that would be an action, but it would be a response to the repeat.

> I'd be a lot happier hearing someone say this instead of a blanket 'nothing'.

They did say roughly those things, worded in a different way. It's not like they planned to say "nothing" and then walk out without elaborating!


The abbreviated story I told was perhaps more dramatic-sounding than it really played out. I didn't just say "Nothing." mic drop walk out :)

The client was satisfied after we owned the mistake, explained that we have a number of measures in place for preventing various mistakes, and that making a test for this particular one doesn't make sense. Like, nothing will prevent me from creating a cron job that does "rm -rf * .o". But lights will start flashing and fixing that kind of blunder won't take long.


If you want to go full corporate, and avoid those nervous laughs and frowns from people who can't tell if you're being serious or not, I recommend dressing it up a little.

You basically took the ROAM approach, apparently without knowing it. This is a good thing. https://blog.planview.com/managing-risks-with-roam-in-agile/


Correct.

A corollary is that Risk Management is a specialist field. The least risky thing to do is always to close down the business (you can't cause an incident if you have no customers).

Engineers and product folk, in particular, I find struggle to understand Risk Management.

When juniors ask me what technical skill I think they should learn next, my answer is always: Risk Management.

(Heavily recommended reading: "Risk: The Science and Politics of Fear")


> Engineers and product folk, in particular, I find struggle to understand Risk Management.

How do you do engineering without risk management? Not the capitalized version, but you’re basically constantly making tradeoffs. I find it really hard to believe that even a junior is unfamiliar with the concept (though the risk they manage tends to be skewed towards risk to their reputation).


Yeah. Policies, procedures, and controls have costs. They can save costs, but they also have costs of their own. Some pay for themselves; some don't. The ones that don't, shouldn't be created.


Good manager, have a cookie.


Honorable mention of ATI Radeon 9600 that was soft-upgradeable (via hacked drivers) to 9800 PRO for a 200% perf boost. Good times.


Actually it was the ATI Radeon 9500 non-Pro that could be modded into a Radeon 9700. Regarding the ATI Radeon 9600, there was a variant called the Radeon 9550 that could be overclocked from 250 MHz to 400 MHz.


And my personal favorite, the Radeon 9100 which was actually a Radeon 8500 in PCI format instead of AGP. With a slightly tweaked 8500 Mac Edition bios you would have a very fast GPU for first gen PCI Macs through the Yikes G4. I believe some faster NVidia PCI cards ended up appearing and being made to run on Macs but I had moved from my XPostfacto PCI Macs to an Intel mini by then.


> Actually it was the Ati Radeon 9500 non pro that could be modded into a Radeon 9700.

Sometimes. At least some of those 9500s were binned parts that showed their broken bits when you modded them. I had one. The screen turned into a kaleidoscope when I tried to play a game.

Was definitely worth a try if you had one though!


And NVidia GeForce 6600GS to 6800GT with pencil mod and flash.


My preferred way of disabling Windows Defender is to boot Linux, mount windows partition and rename windows defender directories to *.disabled or whatever.

Example (assuming it is mounted at /mnt/ntfs):

mv "/mnt/ntfs/Program Files/Windows Defender" "/mnt/ntfs/Program Files/Windows Defender.disabled"

mv "/mnt/ntfs/Program Files (x86)/Windows Defender" "/mnt/ntfs/Program Files (x86)/Windows Defender.disabled"

mv "/mnt/ntfs/ProgramData/Microsoft/Windows Defender" "/mnt/ntfs/ProgramData/Microsoft/Windows Defender.disabled"

Antivirus service fails to start and that's about it, no other side effects.

To revert just rename back.

I have dual boot set up, but I believe the Ubuntu USB install image supports NTFS.
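The three renames above lend themselves to a small toggle helper, run from the Linux side with the Windows partition mounted read-write (a sketch; the mount point and directory names are taken from the comment above):

```shell
#!/bin/sh
# Hypothetical helper: toggles the ".disabled" rename on each
# Windows Defender directory found under the given mount point.
toggle_defender() {
    mnt="$1"
    for d in "$mnt/Program Files/Windows Defender" \
             "$mnt/Program Files (x86)/Windows Defender" \
             "$mnt/ProgramData/Microsoft/Windows Defender"; do
        if [ -d "$d" ]; then
            mv "$d" "$d.disabled"        # disable
        elif [ -d "$d.disabled" ]; then
            mv "$d.disabled" "$d"        # re-enable
        fi
    done
}

# Usage (with the Windows partition mounted at /mnt/ntfs):
#   toggle_defender /mnt/ntfs
```

Running it once disables, running it again reverts, so the same script covers both directions.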


Windows also contains 3 drivers loaded during boot, all starting with wd*, especially wdboot.sys. While they are loaded, some paths to Defender and some registry keys are blocked. I always remove them from the custom ISO I use to install Windows, using dism.exe. You can also reboot into safe mode and rename them. After that, chipping away at Defender using takeown etc. works.

If you just rename the folders, those drivers are probably still active


Wouldn't windows' repair mechanism (dism/sfc) autofix this eventually?


Apparently not.

In my first attempt I actually deleted the directories altogether, but later I wanted to scan a system manually and couldn't repair the installation to get WD running again.


Is there a reason this doesn't work from windows itself?


I've tried once but windows tries really hard not to let you do that.

My Windows kung-fu is rusty these days so the Linux method seemed neater.


What's happening here is that when you boot to something other than the Windows residing on your main NTFS volume, your main Windows volume is inactive (its system files are not the ones running), so all those Windows files & folders are dormant, just like any other storage medium. So you can edit the filenames without them being in use at the time, and without your normal Windows processes interfering with the deed.

With Linux you have to be able to access the Windows files for this, and for years now Linux has been able to read & write to the NTFS filesystem decently.

In the Linux example this "disables" the entire Windows Defender folder and everything in it.

In addition to that however, contained in the WinSxS folder you can find some stragglers.

I'll add the belt & suspenders non-Linux equivalent for an up-to-date W11 pro system:

Boot to the Windows startup USB device. You will not select "Install Now", because that is not what is wanted at all. Instead click "Repair my computer" and progress to Troubleshooting and the command prompt. This way the terminal CMD window is from a version of "MININT" running straight from a ramdisk in memory, identified as volume X:.

If you need a scratch pad type in "notepad" and it will pop up. Now you have access to your filesystem in "DOS" with a mouse if you need it.

For this disablement the keyboard can be enough in the CMD window, without having to paste lines from a more complex script opened as a text file in Notepad. For manual typing, though, you'd have to type in each of the Rename commands one character at a time with perfection. So you'd probably prefer pasting from prefabricated text files.

All your regular Windows folders & files will still be on your main drive, and it will almost always still be identified as C:. Those files are just sitting there dormant and you are like the Trusted Installer, looking down from your perch on X:\.

You may already be just as powerful as Linux now.

Now this just disables the antivirus executable, not the entire folder (the firewall can be controlled from the GUI, but it's not the processor-hog the antivirus is). This is not for PC's in contact with the internet ! :

Rename "c:\program files\windows defender\msmpeng.exe" "c:\program files\windows defender\msmpeng.OFF"

Rename "c:\windows\winsxs\amd64_userexperience-desktop_31bf3856ad364e35_10.0.22xxx.xxxx_none_xxxx....xx\msmpeng.exe" "c:\windows\winsxs\amd64_userexperience-desktop_31bf3856ad364e35_10.0.22xxx.xxxx_none_xxxx....xx\msmpeng.OFF"

This second disablement is the variable one: you need to look in the WinSxS folder in advance, while Windows is still running, to check the "x" values above for your particular build, before you will know your exact complete WinSxS sub-folder names. The publickeytoken of 31bf3856ad364e35 may also be subject to change in the future.

Or you can even browse for the target file in Notepad and rename it right there, without typing any commands into the CLI. Even though it would do no good to actually "open" the executable in Notepad, you can start going through the motions as if you were going to open the EXE file, and change the name through Notepad's limited open/save GUI.

If using the correct foldernames, type or paste those two commands into the terminal (one at a time, this is not powershell) and the target executables will be instantly renamed.

When you reboot to your regular Windows, it will not be able to find the msmpeng.exe file when it wants to run it after that. So no Windows antivirus running.

But it hasn't gone away, you can always rename it back to an EXE when you do need it later on.

Based on those two Rename commands you could also reverse the "manual" renaming procedure and effectively toggle the activity of msmpeng.exe, each time using two specific "lines of code" based on the above examples. I guess you could call them very simple scripts.

From what I understand gamers do things like this when PC's are not going on the internet.


Windows' idiotic file permissions are almost impossible to manage, even if you know what you are doing.


Seems like I used to be able to do stuff like this from a bash shell under cygwin, but I haven't really used Windows since the XP era.



Mandelbrot set calculation is a curious intersection of math and computer science. I went down that rabbit hole once :)

Naive implementation is easy, but there are many ways to improve performance.
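The naive version really is just a few lines of complex arithmetic; the rabbit hole is everything past it (escape-time tricks, periodicity checking, perturbation theory). A minimal sketch in Python:

```python
# Naive Mandelbrot escape-time: for each point c, iterate
# z -> z^2 + c and count iterations until |z| exceeds 2.
def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| escapes 2, or max_iter
    if it never does (i.e. c is presumed to be in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Points inside the set never escape; points far outside escape at once.
print(mandelbrot_escape(0j))      # 100 (deep inside, hits max_iter)
print(mandelbrot_escape(2 + 2j))  # 0 (far outside, escapes immediately)
```

Mapping escape counts over a grid of c values gives the familiar picture; the performance tricks are mostly about skipping or shortening these per-point loops.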


Strange thing to say since CS is a sub-field of math.


I wouldn't really say that's a universally held notion, especially in the modern world. Historically, yes, but these days it stands more on its own. In similar vein, you could say biology is a subfield of physics, but most people don't think of them that way.


To a librarian, Computer Science and Library Science are just different aspects of information systems, which are (a librarian need hardly tell HN) humanity's highest calling, thus justifying their shared Dewey classification of 0.


Interesting! And "Science" is all the way down in Class 500.

https://en.wikipedia.org/wiki/List_of_Dewey_Decimal_classes


That's so nice, it brings a tear to the eye. I hope we live up to librarians' expectations.


Philosophically, it's not really, though. As a pure mathematician, I would not consider CS a subfield of math. The spirit of math is really the study of abstract axiomatic systems, and while CS in theory is sort of like that, in actual practice CS is too tied to and focused on the actual instantiation of computers to really have the spirit of math.


Many would disagree with your characterisation of pure mathematics, computer science, or both. And I am certain that computer science yields nothing to pure mathematics in her pursuit of abstraction.


All I know is that when I read computer science, it doesn't feel like math at all. Other people can characterize it however they choose.


Most CS in practice is really software engineering


I think that statement held true before the widespread availability of computing, when most computer science was really theoretical. I once skimmed a few chapters of a graduate-level textbook on category theory and realized that it was the foundation of object-oriented programming.

The biggest issue is that a lot of "Computer Science" is really applied software engineering, much like confusing physics with mechanical engineering.

Or, a different way to say it: Most students studying "Computer Science" really should be studying "Software Engineering."

More practically, I have a degree in Computer Science. When I was in school, it was clear that most of my professors couldn't program their way out of a paper bag, nor did they understand how to structure a large software program. It was the blind leading the blind.


Meanwhile at my ABET-accredited, 2nd-rate state school, my Computer Science professors were talented people who had been doing this shit for decades and clearly understood what they were talking about.

I had ONE class that was about Software engineering.

Meanwhile I had an entire year's worth of curriculum that was just "go take various unrelated science and math classes so you have a strong understanding of the fundamentals in both science and math".

People so often generalize their very specific college experience to the entire world. Meanwhile you'd be lucky to find a consistent college experience just from crossing state lines.


All fields are subfields of polymathy.


I thought you were joking, but nope.

https://en.wikipedia.org/wiki/Polymath


Of course these terms aren’t well-defined, but some (me) would say both math and CS are subfields of “formal systems” (but “formal systems” could be named CS, which would put it at the top of the hierarchy)


The Joy computer language and Scheme with SICP are prime examples.

Reading SICP is the 'easy way'; reading the Joy docs and trying to code even something as simple as a quadratic equation solver is a deep, hard task which requires a lot of knowledge of category theory.


Software engineering has as much to do with sociology as it does with math.

Systems software has to deal with a lot of physics and engineering stuff (speed of light, power, heat, mean time to component failure, etc, etc.)


Software Engineering and Computer Science don't necessarily mean the same thing (even though at this point, most schools with CS programs just teach software engineering anyway).


…and I thought math branched off from computing science sometime in the 19th century. Before that, what was called "math" was mostly algorithms to compute stuff.

