Hacker News new | past | comments | ask | show | jobs | submit | 66fm472tjy7's comments login

I cannot confirm this behavior. With a router running FRITZ!OS 7.57, configured to use Google's DNS (in the router only), I get the following on Windows 10:

  > nslookup google.com
  Server:  fritz.box
  Address:  fd00::[redacted]

  Non-authoritative answer:
  Name:    google.com
  Addresses:  2a00:1450:4001:828::200e
            142.250.181.238
Update: the connection does have the DNS suffix, so according to the superuser answer linked in the OP (the first result when looking up what a DNS suffix is), it should get appended to lookups on Windows, but it looks like it isn't in my case.

  > ipconfig
  [...]
  Connection-specific DNS Suffix  . : fritz.box
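
A quick way to check whether the suffix is actually being appended (a hedged suggestion; output will differ per setup): a trailing dot marks a name as fully qualified, so no suffix may be appended to it, while a single-label name is exactly the case where the suffix should kick in.

  > nslookup google.com.
  > nslookup google

If Windows appends the suffix, the second query goes out as google.fritz.box; the first, with its trailing dot, never gets one.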


I noticed a lot of comments explaining the same thing, but also some confirming my observations. So far it appears the issue is mostly present if you have configured a separate DNS server in the DHCP settings, i.e. DNS is resolved somewhere outside the FRITZ!Box. I will investigate further and update the article.


> A company should have every right to deny service

Plenty of utility companies are already being forced to provide service. As the EDPB opinion says, a lot of big tech is

> decisive for participation in social life or access to professional networks, even more so in the presence of lock-in or network effects

> Untargeted advertising pays 90+% less than targeted advertising

I don't think that such a large difference is rational. "Untargeted" advertising can still be contextual, i.e. based on the content being viewed, just not on surveilling the viewer.

> If they lose money by serving a user content because they denied targeted advertising, they should be able to deny them service or have them pay up.

As I understand it, the opinion does not categorically rule this out:

> Controllers should ensure that the fee is not such as to inhibit data subjects from making a genuine choice

That is why NOYB also focuses[0] on the fact that the fee is disproportionate:

> The current average revenue for programmatic advertising in the EU is € [1.41] per user - across all websites per month [...] visiting the top 100 websites can already cost more than € [1500] per year if you do not consent to tracking

---

[0] https://noyb.eu/en/statement-edpb-pay-or-okay-opinion, https://weis2019.econinfosec.org/wp-content/uploads/sites/6/...


I feel like commenters in this thread are talking past each other.

Some are saying "of course sites are still tracking you in incognito!", but it is unclear to me what they mean by this. I see the following interpretations:

1. Sites can still use local storage so they can track you for the duration of your incognito session, but they cannot connect this tracking to your regular session or other incognito sessions as incognito sessions start with empty local storage and discard it at the end of the session.

2. Sites do not rely on local storage, instead using fingerprinting via a combination of IP, HTTP headers, information they can query via JS, etc., so incognito has no effect on sites' ability to track you.

3. Google has special privileges in Chrome to track you when you are incognito.


> I feel like commenters in this thread are talking past each other.

Agree.

The point for me is that Google could track you anyway, which is beyond the scope of their message to the (average) user.


> It is possible. Don't give them money and only provide basic shelter

In the EU you would have to reduce your welfare state to that level for your own citizens as well. The ECJ says[0]:

> It follows that the level of social security benefits paid to refugees by the Member State which granted that status, whether temporary or permanent, must be the same as that offered to nationals of that Member State

[0] https://curia.europa.eu/juris/document/document.jsf?text=&do...


Then maybe refuse to register them, like the Hungarian authorities do? Understaff and reduce migrant registration offices to a bare minimum like the French are doing with immigration offices.

Hungary and Poland have repeatedly boycotted and vetoed the EU's migration policies. If this gets support from a country like NL, there is hope to change the EU's ill suited Merkel/Juncker policies on migration.

The alternative is right-wing populists like Wilders winning elections in more EU states. Consider Le Pen's FN winning the French elections, for instance. In the German government there is already consensus on a stricter migration policy in order to stem support for the far right.

Russia is also using migrants to put pressure on the Polish, Baltic, and Finnish borders. We shouldn't allow this.


In my experience, people are sensitive to different aspects/weaknesses in game graphics. For instance, I don't really notice any difference between 60 and 120 FPS. I am also not very bothered by traversal stutter.

What I AM sensitive to, however, is temporal instability: it just draws my attention and hurts immersion. This is where DLSS makes a huge difference, as shown in [0].

Therefore it is sad that Bethesda chose[1] to deliver worse image quality than was possible for 80%+ of their PC customers[2].

----

[0] https://youtu.be/ciOFwUBTs5s?feature=shared&t=336

[1] https://news.ycombinator.com/item?id=37452149

[2] https://archive.ph/mqPLK, Nvidia has 75% market share here, but you have to look at the higher-end parts only and exclude Intel, as Starfield does not run at all on their GPUs[3]

[3] https://in.ign.com/starfield/193351/news/starfield-intel-fin...


I am not optimistic that the de-facto end of general computation can be prevented, or that there will even be noteworthy opposition.

There are so many powerful interests that stand to gain from preventing e.g. ad-blocking and content capture. Thanks to Windows 11 requiring TPM, it is just a matter of time until hardware support for remote attestation is ubiquitous even on desktop computers.

Meanwhile, our attention (my own included) is, perhaps justifiably to some extent, on the latest news fed to us by the algorithm about $EXISTENTIAL_THREAT and how $THE_OTHER_SIDE did $EVIL_THING. Organizations that used to effectively fight threats to freedom like this (FSF, pirate parties, CCC, EFF, etc.) have lost a lot of their support/influence and clarity of purpose over the last decade.


> Everything should be a drop-in replacement

This is not true for many applications. Due to the removal of many APIs from the JDK with Java 9, I needed the following dependency artifactIds to be able to move a JEE application with SOAP web services to Java 11: jaxb-api, jaxb-core, jaxb-runtime, istack-commons-runtime, jboss-jaxws-api_2.2_spec, glassfish-corba-omgapi, jboss-annotations-api_1.2_spec, activation, jboss-saaj-api_1.3_spec, saaj-impl, stax-ex, jsr181-api, txw2.

Many of these spec APIs/implementations are provided by different artifacts that are incompatible with each other. Some I only discovered when something failed at runtime, as they perform implementation lookups and you don't get compile errors.

Additionally, many of the Maven plugins we used no longer worked and our application server failed to start.
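
For illustration, re-adding just the JAXB pieces in Maven looks roughly like this (groupIds and versions here are my assumptions from memory, not necessarily the exact artifacts we used; check your own dependency tree):

```xml
<!-- Sketch: JAXB API plus a runtime, both removed from the JDK in Java 11. -->
<dependency>
  <groupId>javax.xml.bind</groupId>
  <artifactId>jaxb-api</artifactId>
  <version>2.3.1</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jaxb</groupId>
  <artifactId>jaxb-runtime</artifactId>
  <version>2.3.1</version>
</dependency>
```

The runtime is discovered via service lookup at runtime, which is exactly why a missing implementation artifact compiles fine and only blows up later.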


Are you really actually using CORBA? That would be heartwarming if so. Maybe just RMI over IIOP?


It's far less exciting I'm afraid: we use [0] to generate our DB IDs and it implements org.omg.CORBA.portable.IDLEntity.

We could fork it and remove the interface or switch to ULID[1] instead.

[0] https://github.com/stephenc/eaio-uuid/blob/master/src/main/j...

[1] https://github.com/ulid/spec


Shame :(

Given that the interface is trivial, you could also just define it in your codebase. I've done that a few times to shim small bits of log4j and Spring that some library uses, when I would rather not have those as a dependency.
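
For example, the whole shim can be a single file in your own source tree (a sketch; this mirrors the marker interface as it existed in the JDK before its removal):

```java
// src/main/java/org/omg/CORBA/portable/IDLEntity.java
package org.omg.CORBA.portable;

import java.io.Serializable;

// Marker interface matching the one removed from the JDK (Java 11+).
// A library that only *references* the type (like eaio-uuid's UUID class)
// compiles and runs against this local definition; no CORBA runtime needed.
public interface IDLEntity extends Serializable {
}
```

This only works because nothing actually invokes CORBA machinery; if the library did more than implement the marker, you'd need a real replacement artifact such as glassfish-corba-omgapi.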


> We need this in our corporate client device fleet to counter specific threats

Can you please expand on what you verify via remote attestation and against which attack vectors this protects you?

Does this protect you against the usual attack vectors of your employees logging in on phishing sites, downloading malware, running office macros etc? Stealing your data usually does not need any root/kernel access.


We use RMQ for most of our asynchronous processing. In most cases, we get a HTTP call and publish a message to the RMQ after committing the DB transaction, then we send the response to the HTTP client.

We found out the hard way that RMQ does not behave like a transactional DB. Just because publishing worked does not mean the message will be delivered.

Our solution is to also write the message into an outbox table in the DB. We then publish the message using confirms[0]. RMQ asynchronously sends us a confirmation when it has really persisted the message. We then delete the outbox entry. If we do not receive the confirmation in time, a timer will re-publish the message.

Therefore I disagree with the suggestion of using a library wrapping the native RMQ one. We are using spring-amqp and this made it harder to understand what is going on. In the end, for a large project you will have to understand nuances of RMQ (and other infrastructure you are using). Using a leaky abstraction over it means you now have to understand both the underlying product and the abstraction.

[0] https://www.rabbitmq.com/confirms.html#publisher-confirms
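
The outbox flow described above can be compressed into a sketch like this (class and method names are invented for illustration; the real implementation uses spring-amqp and a DB table, and the broker confirm here is simulated):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OutboxSketch {

    // Stands in for the outbox table: message id -> payload.
    static final Map<Long, String> outbox = new ConcurrentHashMap<>();

    // Step 1: within the same DB transaction as the business data,
    // store the message in the outbox.
    static void saveToOutbox(long id, String payload) { outbox.put(id, payload); }

    // Step 2: publish to RMQ with publisher confirms enabled (not shown).
    // Step 3: the broker's asynchronous confirm callback deletes the row.
    static void onConfirm(long id) { outbox.remove(id); }

    // Step 4: a periodic timer re-publishes anything still unconfirmed.
    static int unconfirmedCount() { return outbox.size(); }

    public static void main(String[] args) {
        saveToOutbox(1L, "payment-received");
        saveToOutbox(2L, "order-shipped");

        onConfirm(1L); // broker confirmed message 1; message 2's confirm was lost

        // The timer would now re-publish exactly the unconfirmed messages.
        System.out.println("unconfirmed: " + unconfirmedCount());
    }
}
```

The key property is that a message is only forgotten after the broker's confirm arrives, so a lost publish degrades into a delayed (possibly duplicated) delivery rather than a silent drop.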


I agree. The pattern I've seen more than once is:

1) Adopt RabbitMQ without any experts on the team

2) Conceal Rabbit/AMQP functionality as much as possible behind a simplifying abstraction, often in multiple layers, often written by non-experts

3) Run into some intractable reliability or scaling problem

4) Have no idea how to solve it because you still don't have any experts

5) Throw a lot of money at the problem, fail

6) Decide to do a very expensive migration to a different system (SNS+SQS, Kafka, etc.)

At that point, you go back to step 1. If you're lucky, somebody has expertise in the new system and the migration can be pulled off successfully. Otherwise, you either end up repeating the whole process or everything goes off the rails when you're halfway migrated to the new system.

This same process happens for all kinds of stuff, not just RabbitMQ, of course.


Kafka is at least a bit simpler than RabbitMQ, though both are very square-ish shaped pegs and are usually forced into very round-ish holes. People regularly think they need a message queue, when they really need a job queue, or message bus. Or even all three, but then they try to hack everything on top of Kafka (or RMQ) ... which can be done, of course, but "results may vary".

For tracking state at scale (but still per-job, per-thing) a Cassandra-like system works best (but preferably a better implementation, e.g. ScyllaDB or Aerospike or some other KV store).


> People regularly think they need a message queue, when they really need a job queue, or message bus

This is one of the reasons I am a really, really big fan of Google Cloud's Task Queues. It allows the stupidest, simplest temporal execution of HTTP invocations.

Currently working on a project in AWS and it's stunning how complicated it is to achieve the same simple need of "I want to execute this HTTP call at this time in the future". It's either AmazonMQ -- using either ActiveMQ or RabbitMQ with plugins -- or hacking around SQS's 15-minute delay limit. In our case, we are going to end up wrapping our messages in an envelope with a delivery time; if a message hasn't met its delivery time, we put it back into SQS.
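
That re-enqueue loop can be sketched like this (names and structure are mine, not the poster's actual code; the only hard fact assumed is SQS's 15-minute/900-second DelaySeconds cap):

```java
import java.time.Duration;
import java.time.Instant;

public class DelayEnvelopeSketch {
    // SQS caps per-message delay at 15 minutes.
    static final Duration MAX_SQS_DELAY = Duration.ofMinutes(15);

    // The envelope wraps the real payload with its intended delivery time.
    record Envelope(String body, Instant deliverAt) {}

    // Returns the delay to use when putting the envelope back into the
    // queue, or null if the delivery time has arrived and it should be
    // processed now.
    static Duration nextDelay(Envelope e, Instant now) {
        Duration remaining = Duration.between(now, e.deliverAt());
        if (remaining.isNegative() || remaining.isZero()) return null; // due: process
        // Not due yet: re-enqueue with the remaining time, capped at 15 min.
        return remaining.compareTo(MAX_SQS_DELAY) < 0 ? remaining : MAX_SQS_DELAY;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T00:00:00Z");

        Envelope farFuture = new Envelope("hi", now.plus(Duration.ofHours(2)));
        Envelope due = new Envelope("hi", now.minusSeconds(1));

        System.out.println(nextDelay(farFuture, now)); // capped at the SQS limit
        System.out.println(nextDelay(due, now));       // due now: process it
    }
}
```

A two-hour deferral thus takes eight hops through the queue, each paying one receive and one send, which is part of why this feels so much clunkier than a native task queue.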

GCP is highly underrated for how it simplifies control over execution of code. Pub/Sub and Task Queues both have HTTP delivery built in. Couple that with Google Cloud Run and it is a recipe for building almost any type of execution model with much less complexity and overhead


In case you want to avoid an extra envelope, you can also add custom headers to SQS messages. This can be handy if you want to implement that delay hack without parsing message bodies.


Lol to “Kafka is at least a bit simpler than RabbitMQ”. I’m sorry, what universe does this statement live in?


True, it’s actually a lot simpler than RabbitMQ. People seem to assume “giant ball of enterprisey Java” means that the experience using it will be complicated. Kafka is extremely reliable, simple to cluster, battle-tested (I haven’t hit an actual bug in Kafka in ages), the self-healing is turnkey, and it has way stronger guarantees for clients.

Where it bites people is that it’s not a queue and scaling is harder than just add more consumers.


I am curious if anyone has used confluent.io (kafka as a service). Or is it too cost prohibitive?


Java is a bit simpler than Erlang


I looked it up but couldn't really find information, what would you say is the difference between a "message queue", a "job queue" and a "message bus"?


I'm just throwing these out based on my experience; maybe they have some agreed-upon precise definitions, but I'm not aware of any :o

message bus: firehose of events, (by default) no ACK. usually multi producer multi consumer. (see also DBus which is more of an RPC layer + service discovery + pub/sub via event listeners)

message queue: usually between components, ACK, but no selective ACK, backpressure, might even have support for "dead letters" (letters not ACKed by any consumer)

job queue: selective ACK, retry, etc.

(there's also the "enterprise service bus", which is similar, but mostly implemented on things like IBM MQ)


I ran into the exact same problem as the author and I fixed it by...reading the documentation.

In my case, a whole team of devs was using RMQ without knowing anything about it. Literally caused many sev-1 occurrences over the years until I resolved all of the issues. It took a datacenter migration that allowed me the opportunity to redesign the entire RMQ infrastructure before I was able to put the whole mess to bed.


Sounds exactly like the story for many adopting NoSQL: adopt an RDBMS with no experts, hide it behind an ORM, run into scaling and performance problems, have no idea how to solve them, scale the hardware vertically, move to NoSQL.


I’m currently wrestling with a thing at work where someone wrapped a frameworkish library with their own abstraction. Now I’m trying to add a cross-cutting concern that neither my coworker nor the authors thought about, and so instead of punching through three layers of inadequate data passing I’ve got six to deal with and a stutter as well (builder patterns are great, except when they are not).

Having this new failure mode added to all the other ones I’ve already met over the last few decades has colored my perception a bit, and I’m having opinions about how you shouldn’t try to wrap a wrapper, and maybe the best way to live with a bad API is to pass through the yucky bit as quickly as possible - preprocess to see if you can avoid calling it at all, and then avoid asking it to do anything extra the rest of the time.

That part doesn’t feel that transformative to me but maybe I’m wrong. What’s bigger and stickier for me is that I now have to think about some NIH code we wrote that deeply bothers me, and decide if I still don’t like it, or if the author had the same conclusion and this was their answer.


If you have to write all messages to the DB, why use RMQ at all and not just read the messages from the DB?


How quickly does RMQ ack the message? Obviously too long to delay an HTTP response, or you’d have skipped the DB part of this; but this seems kind of clunky. I know Kafka has (optional, tunable) acknowledgements for publication, for example, that you could use for this.


In the first iteration of using confirms, we did not have the outbox but only logged how long it took to get the confirmation. After 3 seconds, we would throw out the expected confirmation. If a confirmation took longer than that, we would log that we received an unknown confirmation.

We hoped it would be fast enough that we could just wait for the confirmation before committing the transaction.

The official documentation says

> This means that under a constant load, latency for basic.ack can reach a few hundred milliseconds

I never did statistics, just looked at the log. IIRC most were acceptable but > 3s occurred frequently enough (and we even had instances of messages never being confirmed, IIRC) that we abandoned that plan.

We considered using Debezium[0], but decided on the current solution as it could be solved entirely with the current services and infrastructure whereas Debezium would have required us to deploy (writing this from memory so this might be inaccurate/incomplete) Kafka, Zookeeper, and a connector service.

[0] https://debezium.io/


Yep, Debezium is built on Kafka Connect, and yeah, it expects a Kafka cluster to talk to, which will have ZK present for maintaining cluster state.


Kafka has shipped the long-awaited ZooKeeper-free mode, but AFAIK it’s still beta and behind feature flags on the producer, broker, and consumer (like almost all Kafka config :( but that’s another story)


Yeah, it's shipped, but it's missing some existing ZK features that tooling around Kafka relied on, and I'm a bit embarrassed for Confluent that they pushed KRaft so hard without a replacement.

E.g. the ability to watch a ZK node for changes, which means in Kafka sans ZK, you can't detect changes to topics without continuously polling via the admin client.

A coworker is working to implement something like this for KRaft, but it really demonstrates how an IPO can cause a company that was the steward of a FOSS project to do things detrimental to that project to keep the share price up. (Was also interesting how many key Confluent people left right after the IPO)

The other very notable change is how Confluent's dev effort has switched from the open source project to the Enterprise Edition, but they still have the majority of PMC members, while not having the corporate blessing to spend time reviewing PRs.


> you can't detect changes to topics without continuously polling via the admin client.

Yikes, that sounds like an oversight! Aren't topic configs written to a system topic that you could consume from?


They were supposed to be, in the original KIP, but that changed, I'm unclear as to what drove that change.

So my coworker's solution joins the KRaft quorum as an observer, then publishes metadata changes to a topic you can consume from.


Sorry, realised I'd lapsed into jargon and missed the edit.

KIP = Kafka Improvement Proposal. The mechanism for proposing big changes to Kafka and getting community feedback before core committers vote on adoption or not.

I'm not sure why Zookeeper is viewed so negatively in regards to Kafka, it's damn solid, and if I can quote Jepsen "Use Zookeeper". I know Confluent wanted to replace it because it struggled when you hit thousands of topics on a single cluster, but that feels like a very niche use case to me.


That sounds like a handy project!

I agree re: ZooKeeper. It’s rarely been the part of the stack making me lose sleep. KRaft seems like a way to “modernize” Kafka by detaching it from the Hadoop ecosystem — and I think that’s about it. :(


Kafka's acks aren't between consumer/producer or consumer/cluster; they're solely between producer and cluster.

It's one of Kafka's strengths.


It depends on the current throughput of the system, how many queues a message is routed to, size of the message etc. But a mostly idle RabbitMQ cluster with fast disks should confirm a message published to a single quorum queue in a couple of ms.


Using MQTT, my Sonoff with Tasmota on it will, as soon as it gets a message to switch, reply with its current state. Seems simple enough?


> and there's absolutely nothing that can be done to stop them

Congress could pass laws saying that the EPA can regulate greenhouse gas emissions, that states cannot forbid abortions, etc.

If you think that these rulings are not plausible interpretations of the law, Congress can even define the size of the court[0]. They could pack the court with judges who will interpret the law in their favor.

It is my understanding that the Democratic Party, which holds the majority in both chambers, claims to be in favor of these policies, so why aren't they taking action?

[0] https://en.wikipedia.org/wiki/Judiciary_Act_of_1869


It's difficult when the balance of power is skewed so heavily to the right.

A CA senator represents ~10,000,000 people.

A WY senator represents ~200,000.


That determines who has control. However, the Democrats currently have control despite this, if only barely.

So the question remains - why don't they just do what the court has said they should, and pass a law?


See my answer above. It comes down to the filibuster. That's a procedural rule making it so that the Democrats do not, in fact, currently have control. They have to find 10 Republicans to "come across the aisle" in order to get legislation on the floor for a vote. Getting those Senators to come across the aisle is known as politicking. That's just the way the system works.

Democrats do have the power to change those procedural rules, but there are huge downsides in doing so.

Bottom line - don't look for easy solutions to complex political problems. They're likely to be loaded with unintended consequences.


Senate rules. Having a simple majority in the Senate doesn't mean you can ramrod through your agenda. You need 60 votes for legislation to reach the floor for an up-or-down vote. The Democrats theoretically have 51 (including Kamala!), and a couple of them haven't been too reliable. So no, the Democrats can't do "anything they want."

They could end the filibuster - but that would likely backfire. End the filibuster now, right before Summer recess and the Fall election cycle where not much gets done? Probably not a wise move, especially if the Republicans regain control of the Senate and you've just handed them the reins of unbridled power. That's why the filibuster stays.

Our problem really isn't so much that our Congress is deadlocked. Heck, I argue that deadlock is actually a preferable state. Otherwise these bozos would be sowing chaos on a daily basis. The game that's changed is using procedural rules to steal Supreme Court justice nominations from one president and give them to another. Trump seized that opportunity and, to maximize his legacy and influence, chose the youngest and most controversial judges he could get through the nominations.

I know many liberals who refused to vote for Hillary Clinton in 2016. I tried to explain to them, to no avail, what would happen with regards to the Supreme Court should Trump win. I was pooh-poohed. Repeatedly told I was over-sensationalizing things. I've since received apologies from many of them but who cares? The damage has been done and will continue for much of the remainder of our lives.

There is a way out of this mess, but right now America is too polarized for that solution to be viable. Supreme Court justices can be impeached and removed from the bench - it's even been done before. But you need 2/3 of the Senate onboard to do it. I don't realistically see that happening any time for the next 20 years, if ever, and by then the damage wrought will be so severe that I'm afraid America will be unrecognizable.

Bottom line - America was already in decline. This Supreme Court is just going to accelerate that decline. If anything, it might jolt Democrats out of their complacency and their habit of turning their noses up at any candidate they don't think is absolutely perfect. Who knows? Maybe something good will come out of this after all.

