
  grep 'name = ' ms-litebox-Cargo.lock | wc -l
     238
edit:

  grep 'name = ' ms-litebox-Cargo.lock | sort -u | wc -l
     221

I've always done 'sort | uniq'. Never bothered to check whether sort had a unique flag. Although 'uniq -c' is quite nice to have.

       -c, --count
              prefix lines by the number of occurrences

Yeah, to see the packages with multiple versions:

  grep 'name = ' ms-litebox-Cargo.lock | sort | uniq -c | grep -v '1 name' | sort -n
Package windows-sys has the highest number of versions included, 3: 0.59.0, 0.60.2, and 0.61.2.

Edit: Also, beware of the unsorted uniq count:

  cat <<EOF | uniq -c
  > a
  > a
  > b
  > a
  > a
  > EOF
   2 a
   1 b
   2 a

grep -v '1 name' would also exclude counts of 11, 21, etc., but I take your point.
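One way to drop only the exact singletons is to filter on the count field instead of pattern-matching the text, e.g.:

  grep 'name = ' ms-litebox-Cargo.lock | sort | uniq -c | awk '$1 > 1' | sort -n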

  >> power attracts the corrupt
  >  hence why limited terms are a prerequisite for functioning democracies.
The practical effect of limited terms is a set of hapless electeds who depend on the kindness of lobbyists or other stakeholders to perform core duties, such as writing effective legislation. In terms of the Gervais Principle [0], the sociopaths move from elected office to lobbying (which is a natural career progression already) and emplace more of the clueless as elected officials.

But if you want to take Vienna, take Vienna! Embrace limited power.

Limited government power is often rightfully challenged as being out of balance with the tremendous power of non-government entities such as corporations. However, this claim elides that the power and charter of any particular entity are downstream of what is granted and enabled by government functions. Less government power makes for less powerful corporations.

However, once everything is cut down a few notches, will the remaining power still attract the "corrupt"? Yes, power, status, and other social markers will still exist and act like a bug lamp for sociopaths. But on the plus side they won't be as able, as you say, "to do that much damage."

0. https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...


> a set of hapless electeds who depend on the kindness of lobbyists or other stakeholders to perform core duties

You have this already without term limits. An elected officeholder is given more than enough resources to perform her duties, if she wants to. It's a matter of willingness; term limits aren't making things worse than they would otherwise be.


> term limits aren't making things worse than they might otherwise be.

I disagree. Term limits make politicians unaccountable to their constituents and thereby more open to bribes from lobbyists. If they know they can't seek reelection no matter what, they have no motivation not to accept a bribe or disregard everything they campaigned on. On the other hand, when politicians don't have term limits, they must at least worry about their next election campaign and whether the things they're doing right now will ruin their chances at being elected again.

Note: when I say "accept a bribe" I'm talking about being wined, dined, and lobbied by lobbyists, not literally accepting bribes that would get them thrown in jail.


> certificate authority logs, which are actively monitored by vulnerability scanners

That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.

https://certificate.transparency.dev/howctworks/
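To sketch the client side of that idea: the browser already holds the certificate it was served, so it could hash what it has and look the digest up in the log, while a crawler reading the log would only ever see digests. Something like (filename hypothetical):

  # hash the DER form of the served cert; a hash-only log would store
  # and serve this digest rather than the full certificate
  openssl x509 -in served-cert.pem -outform DER | sha256sum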


I think it was more of an intentional tradeoff, as one of the many goals of CT logs was to allow domain owners to discover certificates issued for their domains, or more generally for any interested party to audit the activity of a certificate authority.

What you're describing there is certificate... translucency, I guess?


Yes, "translucent database" was exactly the concept I thought of when asking the question. The concept is keep access to specific items easy but accessing the entire thing as a whole more costly.

Yeah, I think the major improvement of cloud services was rationalizing them into services with a cost, instead of "ask that person for a whatsit" and "hopefully the associate goomba will approve."

> All teams will henceforth expose their data and functionality through service interfaces

https://gist.github.com/chitchcock/1281611


> I've read some stuff which says the cost of 5 SBC boards with pre-applied SMD is now so low, you might as well order 5 so you get at least 1 which works. That means they will wind up working out your tolerance for failure, and produce goods to meet that: if 1 in 5 is viable, that's what they'll target.

That is very rational. Each 9 in uptime or quality represents expense. The expense of moving to the next level up can't always be "shifted left"; sometimes it has to be paid at whatever point in the process money can be applied.

Let's say you have a process that goes right at a minimum of 20% of the time for a cost of 100. The manufacturer can add QA that makes the cost 120. Is it better to trust the manufacturer at a cost of +20? Or is it better to do your own QA for 20 and keep any correct pieces above the 20%?
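Back-of-the-envelope, assuming those numbers (100 per board, 20 per board for your own QA, every board tested yourself): the cost per known-good board is (100 + 20) / yield, so whether self-QA beats the manufacturer's +20 depends entirely on how far the true yield runs above the guaranteed 20% floor.

  # illustrative arithmetic only, using the assumed costs above
  for p in 0.2 0.4 0.6 0.8 1.0; do
    printf 'yield %s -> %s per good board\n' "$p" "$(echo "scale=0; (100+20)/$p" | bc -l)"
  done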


The purpose is to allow user access by LDAP criteria such as group membership, so the sudoers file need not be edited on each and every server.

https://www.sudo.ws/docs/man/sudoers.ldap.man/
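For illustration, a single sudoRole entry (attribute names are from the sudoers.ldap manual above; the DNs and ou layout here are hypothetical) can grant every member of wheel sudo on all hosts:

  # one LDAP entry instead of editing sudoers on every box
  ldapadd -x -D cn=admin,dc=example,dc=com -W <<EOF
  dn: cn=wheel-admins,ou=SUDOers,dc=example,dc=com
  objectClass: top
  objectClass: sudoRole
  cn: wheel-admins
  sudoUser: %wheel
  sudoHost: ALL
  sudoCommand: ALL
  EOF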


Yeah, that's not something I would expect a core util to do.

I would expect another system to query ldap.


They are using GitHub Sponsors and have had some level of contributions.

https://github.com/sponsors/sudo-project


My go-to for fast and easy parallelization is xargs -P.

  find a-bunch-of-files | xargs -P 10 do-something-with-a-file

       -P max-procs
       --max-procs=max-procs
              Run up to max-procs processes at a time; the default is 1.
              If max-procs is 0, xargs will run as many processes as
              possible at a time.

Note that one should use -print0 and -0 for safety, since filenames can contain spaces and even newlines.

Thanks! I've been using the -I{} do-something-to-file "{}" approach, which is also handy for times when the input is one param among others. -0 is much faster.

Edit: Looks like when doing file-by-file, -I{} is still needed:

  # find tmp -type f | xargs -0 ls
  ls: cannot access 'tmp/b file.md'$'\n''tmp/a file.md'$'\n''tmp/c file.md'$'\n': No such file or directory

You have to do `find ... -print0` so find also uses \0 as the separator.

find -print0 will print the files with null bytes as separators

xargs -0 will use a null byte as separator for each argument

printf 'a\0b\0c\0' | xargs -0 -tI{} echo "file -> {}"
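Putting the pieces together with the -P parallelism from upthread (do-something-with-a-file standing in for the real command, as before):

  # NUL-safe and 10-way parallel: find emits \0-separated paths and
  # xargs runs one command per file, up to 10 at a time
  find tmp -type f -print0 | xargs -0 -P 10 -I{} do-something-with-a-file "{}"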


Aspera's FASP [0] is very neat. One drawback is that the work TCP would normally do must instead be done by the application on the CPU: if a packet goes missing, or packets arrive out of order, the Aspera client fixes that up itself rather than leaving it to the TCP stack.

As I understand it, this is also the approach of WEKA.io [1]. Another approach is RDMA [2], used by storage systems like VAST, which pushes those ordering and resend tasks down to NICs that support RDMA, so that applications can read and write directly to the network instead of to system buffers.

0. https://en.wikipedia.org/wiki/Fast_and_Secure_Protocol

1. https://docs.weka.io/weka-system-overview/weka-client-and-mo...

2. https://en.wikipedia.org/wiki/Remote_direct_memory_access


FASP uses forward error correction instead of retransmission. So instead of waiting for something not to show up on the other end and sending it again, it calculates parity and transmits slightly more data up front, with enough redundancy that the receiving end can reconstruct any missing bits.

This is basically how all storage systems work, not just Weka: you calculate enough parity bits to reconstruct the missing data when a drive fails, and the more disks you have, the smaller the parity overhead. Object storage like S3 does this on a massive scale. With a network transfer you typically only need a few percent, unless it's really lossy like Wi-Fi, in which case standards like 802.11n are already doing FEC for you to reduce retransmissions at the TCP layer.
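A toy of the single-parity case (values made up; real schemes like Reed-Solomon generalize this to survive multiple losses):

  # send a, b, c plus parity = a^b^c; if any one block is lost in
  # transit, XORing the three that arrived reconstructs it
  a=13; b=200; c=77
  parity=$(( a ^ b ^ c ))
  b_rebuilt=$(( a ^ c ^ parity ))  # suppose b was the one lost
  echo "lost b=$b  rebuilt=$b_rebuilt"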

In RDMA are the NICs able to perform the reconstruction or does that use a different mechanism to avoid CPU?

Usually RDMA runs over a network that is supposed to be lossless, but it does have checksums to detect corruption, and it recovers with retransmission. InfiniBand NICs handle all of that.

Yes, in addition to inserts which are custom to the mask, there are small frames with a thin rubber band to keep them on you. They were great for roughhousing occasions as well as under a mask. The key part is that the thinness and impermeability of the band allow for a good seal.

https://eyeglass.com/products/criss-optical-collection-mag-1...

