The US has a very weak legal deposit scheme compared to e.g. the UK. IIRC, legal deposit is only required where the author applies for copyright registration, so it’s extremely unlikely that a newspaper would be subject to the legal deposit scheme.
Ah! So step 2 is wait for the spammers to automate blacklisting of Daisy phone numbers, and only then start rolling out a (paid) Daisy option to customers.
Not connecting calls doesn't waste spammer money, but maybe Daisy does.
If the big telco can find 10 righteous callers from a bad-actor telecom, they should keep routing the calls.
Then, once the spammers have blacklisted the Daisy numbers, cycle those spam-free numbers to their customers and start a new batch of Daisy numbers. This way, there is a constant flow of spam-free numbers being cycled into the pool. Of course, everyone and their dog wants your phone number, so you will have to be careful who you give it to if you want it to stay spam-free.
As long as the scammer's paying to route the call, I'm ok with this. And the telcos' fitness function for their pool of robogrannies should be time-spent-on-call. Making it uneconomic is the way to kill it.
My friend works for a big telco and is the guy fixing this problem for them. They have amazing powers of deception when they need it. New numbers can be conjured up at any time.
The new fad among wireless carriers here in the US is to route what they think are spam calls to a fake voicemail box.
Voicemail that is left in this generic voicemail box never makes it to their customer, and the customer is completely unaware that some of their calls have been diverted.
Then suddenly, calls from consenting callers to consenting receivers are labeled as spam and blocked. What can you do about it? Nothing. Switch to email, I guess. Oh wait, same problem.
You sound like an advocate for telemarketers. Am I correct?
I doubt very seriously that the pool of people who have knowingly, intentionally, and explicitly opted in/consented to telemarketing - that is, without any dark pattern involvement and with a clear and unambiguous consent experience - is very large. In fact I think it is infinitesimal, because I can’t recall ever seeing such a consent UX - they ALL involve dark patterns. And if you pair that with “marketer who diligently implements all state & FTC requirements and does timely and accurate processing of removal requests,” I think the 3 relationships left are web app UX testers.
I think the world would be a better place without telemarketing or email marketing. Maybe a “one email per year” limit per merchant who you have actually paid money to and not opted out of.
I’m not OP, but my worry is about the false positives. I have real inbound calls and emails getting detected as spam all the time. Luckily my VoIP provider has a spam box I can look in, but at this point I just have to go through them every so often to make sure I’m not missing anything important.
If the telecoms can perfectly predict the telemarketers, then I’d love it. But in practice how often is this going to block people I know from calling me? Probably not never, and then we just have to give up on phones as a reliable method of communication.
Exactly. Many people want to be able to receive phone calls from their doctors, airlines and schools. These types of B2C calls are presumably most likely to be marked as spam in the event of false positives.
I’ll take my chances. 99% of the people I want to talk to either email/text me first or are already in my contacts list (which I’m not really all that picky about). I’ll accept that failure rate.
Isn't "emailing you first" just kicking the can down the road? What stops spam emails getting through to you? (Besides the exact kind of heuristic filtering you seem to be objecting to, that is.)
Disneyland will have a handful of IPs and phone numbers, and I'd bet my hat they will have a team aggressively calling any ISP or provider that flags them as spam.
Bulk scams by mail are at least less common because mail fraud is investigated pretty seriously and results in federal felony charges. Not to mention the cost of initiation is much higher. Unfortunately individuals are still sometimes targeted.
> Not to mention the cost of initiation is much higher
This is the thing we screwed up for email and phone (after per call fees dropped to zero).
It's not rocket science to create systems that net to zero for common usage (balanced in-bound vs out-bound), but charge an arm and a leg for bulk senders.
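Something like this toy sketch is what I have in mind - the function name, the free allowance, and the fee are all made up, it's just the shape of a "balanced usage is free, bulk pays" rule:

    # Toy pricing rule (illustrative only): roughly balanced accounts pay
    # nothing, while heavily outbound-skewed bulk senders pay per excess message.
    def monthly_fee(sent: int, received: int,
                    free_imbalance: int = 100,
                    fee_per_excess: float = 0.05) -> float:
        # Charge only for outbound volume beyond what the account also
        # receives, plus a small allowance for ordinary asymmetry.
        excess = sent - received - free_imbalance
        return max(0, excess) * fee_per_excess

    print(monthly_fee(sent=120, received=100))       # 0.0  -> normal user
    print(monthly_fee(sent=1_000_000, received=50))  # ~50k -> bulk sender pays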
Until you're running a file server or the equivalent. There has to be some way for a willing recipient to zero-rate or reverse-charge the responses to their requests. The Internet gets this wrong.
The physical mail spammers know to only use deceptive tricks, like "FINAL NOTICE" or pretending to be affiliated with you using some publicly available information. I have not yet seen one dare to full-on lie, because there would be real consequences.
If a scammer puts "FINAL NOTICE" on a solicitation they mailed with no prior relationship, I do still report it as fraud. But that's probably wishful thinking.
Better yet, route all calls for all disconnected/unassigned numbers in their part of the numbering plan to it. It would probably kill robocalling overnight.
I hate to say this.. but I find this very difficult to believe..
I don't think any telco puts effort into stopping spammers.. I'd like them to, but I don't think it's something they either care about or are legally capable of fixing.
I work for a telco, though not in that department. We put a lot of effort into trying to block spam calls, and into adapting systems to the newest tricks. The reason why the results aren't better is (I'm being told) a combination of IP telephony making reliable source tracing all but impossible, and common carrier laws which mean that you can't block a call unless you're 100% certain it's a scam, otherwise you open yourself up to being sued.
My understanding is that the crush risk at Euston is entirely an operational issue of Network Rail's making (NR being the station facility owner), by deliberately not announcing platforms until the last moment, causing passengers to run to the platform en masse. If platforms were announced earlier, the crush risk would be seriously mitigated.
The obvious next question is whether platforms _can_ be announced earlier - to which the answer is, as I understand it, yes. The platforms are known about much further in advance and the reason for the delay appears to be a combination of intransigence by Euston management and a lack of sufficient ticket gateline staff by the train operators.
We're really looking forward to Windows support - I don't think any Actions runner vendor supports Windows at the moment and we were looking at building our own runners as a result, but if this launches soon, we'd be very keen to try it out!
Yup. At the end of the day these logic-bomb-esque mechanisms are unpreventable and just a cat-and-mouse problem.
There should be a way to battle this outside technical measures, like a crowdsourced group of real distributed humans testing apps for anything malicious.
You can detect both the triggered behavior and "hey this looks like a logic bomb" with static analysis. Yes, you'll never trigger this with some dynamic analysis of the app. But "hey, some code that does things associated with malicious or otherwise bad behavior is guarded behind branches that check for specific responses from the app developer's server" is often enough to raise your eyebrows at something.
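As a purely hypothetical sketch of what that kind of check can look like (the call and flag names here are invented, not taken from any real scanner):

    import ast
    import textwrap

    # Invented names, for illustration only.
    RISKY_CALLS = {"load_remote_code", "exec", "eval"}
    REMOTE_FLAG_SOURCES = {"fetch_server_config"}

    def flag_gated_risky_calls(source: str) -> list[int]:
        """Report line numbers where a risky call sits inside a branch whose
        condition depends on a value fetched from the developer's server."""
        tree = ast.parse(source)
        findings = []
        for node in ast.walk(tree):
            if not isinstance(node, ast.If):
                continue
            # Is the branch condition derived from a remote config lookup?
            gated = any(
                isinstance(n, ast.Call)
                and isinstance(n.func, ast.Name)
                and n.func.id in REMOTE_FLAG_SOURCES
                for n in ast.walk(node.test)
            )
            if not gated:
                continue
            # Does the guarded block contain a call we consider risky?
            for n in ast.walk(node):
                if (isinstance(n, ast.Call)
                        and isinstance(n.func, ast.Name)
                        and n.func.id in RISKY_CALLS):
                    findings.append(n.lineno)
        return findings

    sample = textwrap.dedent("""
        if fetch_server_config("enable_hidden_mode"):
            load_remote_code("https://example.com/payload")
    """)
    print(flag_gated_risky_calls(sample))  # flags the gated load_remote_code line

A real pipeline works on compiled app binaries rather than Python source, but the pattern being flagged - risky behavior guarded by a server-controlled switch - is the same.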
In this case suspicious code is anything that achieves a fairly narrow subset of possible outcomes so I doubt it would come up much.
It’s a common fallacy to assume infinite worlds result in every possible world: 1, 10, 100, … is an infinite series, but it covers ~0% of possibilities.
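To make that concrete as a density calculation (taking "possibilities" to mean the natural numbers up to N):

    % Powers of ten form an infinite set, yet they cover a vanishing
    % fraction of {1, ..., N} as N grows.
    \[
      \lim_{N \to \infty} \frac{\#\{\, k \ge 0 : 10^{k} \le N \,\}}{N}
      = \lim_{N \to \infty} \frac{\lfloor \log_{10} N \rfloor + 1}{N}
      = 0 .
    \]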
Yeah. It's really really hard to prevent actors from coming up with clever ways to circumvent the automatic checks. But that just means that apple needs to play the cat-and-mouse game. That's what they always say their cut is for, no?
One of the reasons that Apple does a bit of due diligence during the onboarding of a developer and establishing a developer agreement is to ensure they can reliably take legal action against developers that abuse the system.
The possibility of being banned from the Apple App Store ecosystem and/or legal reprisals is one way to deter unwanted behavior that can't be blocked through technical means.
I think AI is close to the point where synthetic users will be indistinguishable from real ones. To mitigate the techniques above and elsewhere in this thread, I think Apple will quickly move towards making the review process both highly automated and continuous.
It really means we need to lean into what ecosystem and government pressure we can apply to ensure the terms are sensible and fair because it will become nearly impossible to hack around them. (I do think many of these hacks are clever, I just don't think they will be enduring.)
I find your comment overly optimistic. Review processes are already highly automated and effective, but even as they advance there will always (short of AGI) be (1) effective tricks to mask behavior from analysis and (2) a need for a human in the loop to verify findings.
I do agree that we need to continue to apply pressure to tear down walls around the garden as a means of protecting code as speech, including the ability to distribute and run it on our devices without burden.
It does launch an instance and take a snapshot, but what's happening is the sysprep and OOBE stuff that can take 10 mins or so (you can find it in the console and startup logs). That's a lot more overhead than just hydrating an EBS volume.
I'd simplify the guidance even more: short hashes are fine if and only if the repository hasn't been modified since they were generated (AFAIK, git automatically increases the length of the short hash it displays whenever it would otherwise cause a collision).
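A minimal sketch of leaning on that behavior from a script, assuming the git CLI is on PATH (git rev-parse --short returns the shortest prefix that is still unique in the current repo, so it can get longer as the repo grows):

    import subprocess

    def short_hash(ref: str = "HEAD") -> str:
        # git picks the shortest abbreviation that is currently unambiguous,
        # so the prefix it returns can grow as the repository gains objects.
        out = subprocess.run(
            ["git", "rev-parse", "--short", ref],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(short_hash())  # e.g. "a1b2c3d" today; possibly longer later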
Which one is this? (Taxi = Uber, Hotel room = Airbnb, Unregistered security = various crypto?)