
*Unless you're in the cloud, then it's a metric to nickel and dime you with throttling!

On a more serious note, the performance of hardware today is mind-boggling compared to what we all encountered way back when. What I struggle to comprehend, though, is how some software (particularly Windows as an OS, instant messaging applications, etc.) feels less performant now than it ever did.


The performance of hardware today is even more mind-boggling compared to what most people (SRE managers, devs, CTOs) are willing to pay for when it comes to cloud compute.

Even more so when considered in the context of dev 'remote workstations'. I benchmarked perf on AWS instances that were at least 5x slower than an average M1 MacBook and cost hundreds of dollars per dev per month (easily), while the MacBook was a sunk cost!


The answer, I suspect, is the same as always: waiting for I/O in the GUI thread.

Both Telegram and FB Messenger are snappy; I haven't used anything else seriously as of late. (Especially not Teams, nor the late Skype.)


> waiting for I/O in the GUI thread

The problem is sloppy programming. We knew how to make small, fast programs 20+ years ago that would just scream on modern hardware. But now everything is bloated and slow. CPUs can retire billions of instructions per second, yet Discord takes 10+ seconds to open. I'm simply not creative enough to think up how to keep the CPU busy that long opening IRC.


Moore's law really helps you with throughput, but latency still requires good engineering.

And you are right that we had good UI latency even back in the 1980s. You just have to make sure that you do the absolute minimum amount of work possible in the UI 'thread' and do gradual enhancement as more time passes.

As an example, the GEOS word processor on the C64 does nice line breaks at the end of words only. But if you type really fast, it just wraps lines when you hit exactly x letters, and later, when it has some time to catch up, it cleans up the line breaks.

That way it can give you a snappy user experience even on a comically underpowered system. But you can also see that the logic is much more complicated than just implementing a single piece of business logic for where line breaks should go. Complication means more bugs, and more time spent writing, debugging, documenting, etc.
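
To make that concrete, here's a toy sketch of the two-pass idea in Python (hypothetical code, not GEOS's actual algorithm): the fast path chops at a fixed column so keystrokes never block, and a deferred pass re-wraps at word boundaries when the UI has idle time.

    WIDTH = 40  # hypothetical line width in characters

    def wrap_fast(text):
        """Fast path: chop at exactly WIDTH chars - no word-boundary logic."""
        return [text[i:i + WIDTH] for i in range(0, len(text), WIDTH)]

    def wrap_nice(text):
        """Deferred pass: break at word boundaries (overlong words left whole)."""
        lines, line = [], ""
        for word in text.split():
            candidate = (line + " " + word).strip()
            if len(candidate) <= WIDTH:
                line = candidate
            else:
                if line:
                    lines.append(line)
                line = word
        if line:
            lines.append(line)
        return lines

    text = "the quick brown fox jumps over the lazy dog " * 3
    print(wrap_fast(text))  # shown immediately after each keystroke
    print(wrap_nice(text))  # swapped in later, when the event loop is idle

Note how the nice pass carries state (the partially built line) that the fast pass doesn't need, which is exactly where the extra bugs and debugging time come from.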


They could be way faster. They're snappy enough but still, so slow.


FB messenger was so good, but they've killed it on both Windows and Mac and I'm sad about it :(

they are forcing me to use the web client...


CRTs get data to the screen faster. Some LCDs have 500ms delays.


What non-ancient LCDs have response times that high? Even e-ink/e-paper displays are better than that!


TVs can do a bunch of filtering which adds long latency, depending on the picture setting for the type of content (sorry, can't remember the exact term ATM).


That is true, but the worst offenders are about 300ms, and of the 515 TVs rtings have tested, only 5 have a worst case of more than 200ms. A typical 'bad' LCD would usually be somewhere closer to 50-100ms.

https://www.rtings.com/tv/tests/inputs/input-lag


First, the oldest TV on their list is from 2020. Second, they didn't seem to test in the other "smoothing" input modes (because why would you if you're looking for low input latency as opposed to an uninformed consumer just using arbitrary settings?)

A CRT's latency starts at ZERO, depending on what's driving it and when the input is received ("racing the beam").


I'm building a training platform for cyber security and secure development practices, mainly to address the junior-to-mid engineer gaps in "depth" across general cyber security and software engineering skills. I've encountered quite a few folk who have surface-level knowledge but, when having to solve production-ready problems, are blocked quite quickly; it's depth of knowledge that's missing. If anyone is interested in a preview / providing some thoughts, reach out - my email's in my profile!


If you haven’t already, take a gander at the UK NCSC’s CyberEssentials materials / training.


Thanks! I actually wrote my master's thesis on cyber resilience, and the same week I submitted it they released their Cyber Action Toolkit (a precursor to Cyber Essentials). I was able to provide them some feedback, which has since been incorporated, so I believe I'm closely aligned with their recommendations!


Call me dumb - I'll take it! But if we really are trying to keep it simple simple...

Then you just query from the event_receiver_svcX side for events with published > datetime and event_receiver_svcX = FALSE. Once read, set it to TRUE.

To mitigate too many active connections, have a polling/backoff strategy and place a proxy in front of the actual database to proactively throttle where needed.

But the event table:

| event_id | event_msg_src | event_msg | event_msg_published | event_receiver_svc1 | event_receiver_svc2 | event_receiver_svc3 |
|----------|---------------|-----------|---------------------|---------------------|---------------------|---------------------|
| evt01 | svc1 | json_message_format | datetime | TRUE | TRUE | FALSE |
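
A minimal consumer-side sketch of that polling loop (hypothetical names; SQLite standing in for the real database, and the throttling proxy out of scope here):

    import sqlite3
    import time

    # This consumer's own column; TRUE/FALSE in the table maps to 1/0 here.
    SVC_COLUMN = "event_receiver_svc3"

    def handle(event_msg):
        print("processing", event_msg)  # placeholder for real business logic

    def poll_once(conn):
        # Fetch unread events for this service, oldest first.
        rows = conn.execute(
            f"""SELECT event_id, event_msg FROM event
                WHERE {SVC_COLUMN} = 0
                ORDER BY event_msg_published"""
        ).fetchall()
        for event_id, event_msg in rows:
            handle(event_msg)
            # Once read, set this service's column to TRUE.
            conn.execute(
                f"UPDATE event SET {SVC_COLUMN} = 1 WHERE event_id = ?",
                (event_id,),
            )
        conn.commit()
        return len(rows)

    def run(conn):
        delay = 1.0
        while True:
            if poll_once(conn) > 0:
                delay = 1.0                   # found work: poll again soon
            else:
                delay = min(delay * 2, 60.0)  # idle: exponential backoff
            time.sleep(delay)

    if __name__ == "__main__":
        run(sqlite3.connect("events.db"))  # hypothetical DB file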


CloudTrail events should be able to demonstrate WHAT created the EC2s. Off the top of my head, I think it's the RunInstances event.


I'm officially off of AWS so I don't have any consoles to check against, but I'm back on a laptop.

Based on docs and some of the concerns about this happening to someone else, I would probably start with the following:

1. Check who/what created those EC2s[0] using the console to query: eventSource:ec2.amazonaws.com eventName:RunInstances

2. Based on the userIdentity field, query the following actions.

3. Check if someone manually logged into Console (identity dependent) [1]: eventSource:signin.amazonaws.com userIdentity.type:[Root/IAMUser/AssumedRole/FederatedUser/AWSLambda] eventName:ConsoleLogin

4. Check if someone authenticated against Security Token Service (STS) [2]: eventSource:sts.amazonaws.com eventName:GetSessionToken

5. Check if someone used a valid STS Session to AssumeRole: eventSource:sts.amazonaws.com eventName:AssumeRole userIdentity.arn (or other identifier)

6. Check for any new IAM Roles/Accounts made for persistence: eventSource:iam.amazonaws.com (eventName:CreateUser OR eventName:DeleteUser)

7. Check if any already-vulnerable IAM Roles/Accounts were modified to be more permissive [3]: eventSource:iam.amazonaws.com (eventName:CreateRole OR eventName:DeleteRole OR eventName:AttachRolePolicy OR eventName:DetachRolePolicy)

8. Check for any access keys made [4][5]: eventSource:iam.amazonaws.com (eventName:CreateAccessKey OR eventName:DeleteAccessKey)

9. Check if any production / persistent EC2s have had their IAMInstanceProfile changed to allow for a backdoor using EC2 permissions, e.g. from a webshell/backdoor they could have placed on your public-facing infra. [6]

etc. etc.
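
If you'd rather script step 1 than click through the console, a boto3 sketch along these lines should work (CloudTrail's event history is per-region and covers roughly the last 90 days, so loop over regions as needed):

    import boto3
    from datetime import datetime, timedelta, timezone

    # Event history is per-region: repeat for each region the attacker touched.
    ct = boto3.client("cloudtrail", region_name="us-east-1")

    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
        ],
        StartTime=datetime.now(timezone.utc) - timedelta(days=90),
    )

    for page in pages:
        for event in page["Events"]:
            # Username is CloudTrail's summarised identity; the full
            # userIdentity block is in the raw JSON under CloudTrailEvent.
            print(event["EventTime"], event.get("Username"), event["EventId"])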

But if your initial investigations show you have had a compromise, it's probably worthwhile getting professional support to do a thorough audit of your environment.

[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

[1] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...

[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-...

[3] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/s...

[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...

[5] https://research.splunk.com/sources/0460f7da-3254-4d90-b8c0-...

[6] https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_R...


This is helpful. I will look for the logs.

Also some more observations below:

1) Some 20 organisations were created within our Root, all with email IDs on the same domain (co.jp)

2) The attacker had created multiple Fargate templates

3) They created resources in 16-17 AWS regions

4) They requested quota increases for SES and AWS Fargate resource rates, plus SageMaker Notebook maintenance - we have no need for these instances (received an email from AWS for all of this)

5) In some of the emails I started seeing a new name added (random name @outlook.com)


It does sound like you've been compromised by an outfit that has got automation to run these types of activities across compromised accounts. A Reddit post[0] from 3 years ago seems to indicate similar activities.

Do what you can to triage and see what's happened. But I would strongly recommend getting a professional outfit in ASAP to remediate (if you have insurance, notify them of the incident as well - they'll often be able to offer services to support remediation), as well as notifying AWS that an incident has occurred.

[0] https://www.reddit.com/r/aws/comments/119admy/300k_bill_afte...


RunInstances


Admittedly, US pricing for the HP ZBook Ultra was astronomical; within the EU it's on par with standard laptops, and to good effect. The only regret I have is ordering on release day and not waiting for the 128GB version; but its battery life and power have remained unmatched by any of the pretty large workloads I have thrown at it!

Outside of laptops, Beelink and co. are making NUCs with them which are relatively affordable!

I do agree, the scarcity has limited their ability to assess the growth opportunity.


I also have one with 64GB — best laptop I've ever used :-). I have the same regret of not waiting for the 128GB version to be available before buying.


I am genuinely curious as to how this would be a solution for a law practice. How many lawyers are SSH'd into servers? Or am I being ignorant?


As a non-lawyer who’s nonetheless been asked to help to review internal documents en masse - the idea of a fully scriptable <50ms switch time between documents is quite appealing. AI can help with initial screening, but there are many situations where humans are asked or required to do review at scale.


It doesn't have to be used over SSH, some lawyers might be comfortable using the terminal for local work


I hate Word but sometimes have to deal with it when I would rather just have plain text. (Among other reasons, Word is notorious for making it difficult to select text to copy and paste, especially when dealing with legal citations and quotations.) Furthermore, the structure of documents is important to understanding them, especially in the law. So it seems like it would be useful to work with the text of the documents without locking horns with M$.

Scripting uses interest me too. Perhaps pandoc will still be a better option, but I'm also a sucker for TUIs and Charm projects!


This is what you’re looking for: https://tritium.legal/


I saw this on HN before, but how is it for litigation?


I'm working to improve the copy/paste. Right now, you can copy everything, but not select snippets to copy/paste (ways around this, though). Hopefully have it working in the next week!


Vagrant / Packer?


Wouldn't work here; they have software on each VM that cannot be reimaged. To use Packer properly, you should treat VMs like you treat stateless pods: just start a new one and take down the old one.


Sure, then throw Ansible over the top for configuration/change management. Packer gives you a solid base for repeatable deployments. Their model was to ensure that data stays within the VM, which a deployed AMI made with Packer would suit quite nicely. If they need to do per-client configuration, then Ansible or even AWS SSM could fit the bill there once the EC2 instance is deployed.

For data persistence, if they need to upgrade / replace VMs, have a secondary EBS volume mounted which solely stores the persistent data for the account.


That might work as well. I've found Packer + Ansible to be juice not really worth the squeeze vs a base Ubuntu/Debian/Rocky image + a bigger Ansible playbook.


With all the mind share that Terraform gets, you would think Vagrant would at least be known, but alas.


Somebody educate me about the problem Packer would solve for you in 2024?


Making machine images. AWS calls them AMIs. Whatever your platform, that's what it's there for. It's often combined with Ansible, and basically runs like this:

1. Start a base image of Debian / Ubuntu / whatever – this is often done with Terraform.

2. Packer types a boot command after power-on to configure whatever you'd like

3. Packer manages the installation; with Debian and its derivatives, this is done mostly through the arcane language of preseed [0]

4. As a last step, a pre-configured SSH password is set, then the new base VM reboots

5. Ansible detects SSH becoming available, and takes over to do whatever you'd like.

6. Shut down the VM, and create clones as desired. Manage ongoing config in a variety of ways – rolling out a new VM for any change, continuing with Ansible, shifting to Puppet, etc.

[0]: https://wiki.debian.org/DebianInstaller/Preseed


This is nice in its uniformity (same tool works for any distro that has an existing AMI to work with), but it's insanely slow compared to just putting a rootfs together and uploading it as an image.

I think I'd usually rather just use whatever distro-specific tools for putting together a li'l chroot (e.g., debootstrap, pacstrap, whatever) and building a suitable rootfs in there, then finish it up with amazon-ec2-ami-tools or euca2ools or whatever and upload directly. The pace of iteration with Packer is just really painful for me.


I haven’t played with chroot since Gentoo (which for me, was quite a while ago), so I may be incorrect, but isn’t that approach more limited in its customization? As in, you can install some packages, but if you wanted to add other repos, configure 3rd party software, etc. you’re out of luck.


Nah, you can add other repos in a chroot! The only thing you can't really do, AFAIK, is test running a different kernel; for that you've got to actually boot into the system.

If you dual-boot multiple Linux systems you can still administer any of the ones you're not currently running via chroot at any time, and that works fine whether you've got third-party repositories or not. A chroot is also what you'd use to reinstall the bootloader on a system where Windows has nuked the MBR or the EFI vars or whatever.

There might be some edge cases, like software that requires a physical hardware token for licensing purposes and is very aggressive about it; it might also try to check if it's running in a chroot, container, or VM and refuse to play nice, or something like that. But generally you can do basically anything in a chroot that you might do in a local container, and 99% of what you might do in a local VM.


I miss saltstack. I did that whole litany of steps with one tool plus preseed.


Saltstack is still around!


I think the thread is more about how Docker was a reaction to the Vagrant/Packer ecosystem, which was deemed overweight but was in many ways a "Docker-like thing" for VMs.


Oh, yeah, I'm not trying to prosecute, I've just always been Packer-curious.


What's a better way to make VM images?


There's lots of tools in this space. I work on https://github.com/systemd/mkosi for example.


If you want to integrate this into Windows AD, look at ADFS[1] and MSAL[2]. This can pretty much give you OIDC from AD, but you'll have to deal with Microsoft licencing :D.

[1] https://learn.microsoft.com/en-us/windows-server/identity/ad...

[2] https://learn.microsoft.com/en-us/entra/identity-platform/ms...
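
For a rough idea of the client side with MSAL's Python library (hypothetical authority and client ID; your ADFS application-group registration will differ):

    import msal

    # Hypothetical values - substitute your ADFS host and the client ID
    # from the application group you register in ADFS.
    AUTHORITY = "https://adfs.example.com/adfs"
    CLIENT_ID = "00000000-0000-0000-0000-000000000000"
    SCOPES = ["openid", "profile"]

    # instance_discovery=False is typically needed for non-Azure-AD
    # authorities such as on-prem ADFS.
    app = msal.PublicClientApplication(
        CLIENT_ID, authority=AUTHORITY, instance_discovery=False
    )

    # Reuse a cached token if one exists, otherwise pop a browser login.
    result = None
    accounts = app.get_accounts()
    if accounts:
        result = app.acquire_token_silent(SCOPES, account=accounts[0])
    if not result:
        result = app.acquire_token_interactive(scopes=SCOPES)

    if result and "id_token_claims" in result:
        print("Signed in as:", result["id_token_claims"].get("upn"))
    else:
        print("Auth failed:", (result or {}).get("error_description"))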


Up to


Would the easiest route for text be to make your own WebFont, in both serif / sans-serif variants, which is "blurred/pixelated", to override font-family tags in CSS? Then do the remainder of the image blurring with the technique already used?

