The root of the problem is not that we have bots, but that we have normalised lying and deception as part of everyday business. We allow companies to pretend that bots are human beings, and allow call-center employees in third-world countries to pretend (sometimes even through elaborate lying) that they are located in the same country as you. We allow companies to tell outrageous untruths in their advertising - see the Samsung ad they're currently being hauled over the coals for in Australia.
That's the real problem here, and the one we need to fix on a general level, not by band-aid regulations over whichever dishonesty has managed to irritate enough state representatives.
For people not in the know: apparently Samsung is marketing its latest phones as waterproof, with ads showing surfers and people fully clothed underwater. The ACCC does not approve of this, as the phones aren't IP-rated for use in salt water.
That's... surprising. To be clear, I'm not saying I don't believe you. But I'm definitely surprised - I've got a Series 2 which I get wet every day and even use while swimming and I haven't had any issues. My mother has a Series 2 or 3 which she also regularly uses while swimming.
Per Apple's documentation:
> Apple Watch Series 2, Apple Watch Series 3, and Apple Watch Series 4 may be used for shallow water activities like swimming in a pool or ocean.
Not sure how a watch intended for triathlon use has a reputation for the band breaking at the slightest stress and the inside of the watch fogging after an ordinary dip in 3 feet of water. How does a watch like this pass QA and not get investigated (or sued) for false advertising?
A lot of the things we take for granted about the benefits of living in a capitalist society are side effects of capitalists traditionally not being very good at their jobs and giving customers stuff/benefits unintentionally. That seems to be going away thanks to IT and increased testing of assumptions.
The malfunction happened to me after owning the watch for 14 months, and I swam with it in the ocean in month 4 with no problems. It was also fine on the first day; the issue started the day after. My theory is that a very small amount of water got in, but it was enough to damage the OLED screen inside.
The solution would be not using glue, but using screws or a bayonet mount on the back - but then it wouldn't look like a seamless object. Maybe in the new non-Jony era of Apple they will make their watches properly waterproof? I doubt it, though.
Sorry to hear about your watch. I’ve been lucky enough to have it in the water plenty and never have a problem. Maybe go to an Apple store?
Unless it's up front and center as part of setup, it's not being upfront.
It's up front, and the fact that it takes more than a bullet point on the back of the box to explain the details and limits of the feature doesn't change that.
Here's the description from Apple's official Watch marketing page:
> Sweat, surf, and swim proof.
> Apple Watch Series 4 is water resistant to 50 meters and tracks both pool and open-water workouts. Turn the Digital Crown to eject water from the speaker using a burst of sound.
I don't see this contradicted by the additional support document.
They also don't disclose that they have water-damage carve-outs, which shows they know their water resistance wears out quickly enough that they need the carve-out for financial reasons.
"Swim proof" the watch is not. Apple's advertising, whats in the box & paper manual, what copy is shown on the store, and what text that shows up on the watch when you set it up does not warn of any of this.
Also, go ask a cross-section of the population what "water resistant to 50 meters" means, and they'll probably give you the more common understanding: waterproof.
Even if I wasn't worried about the water, what about beach sand?
If an advertisement told me my phone was bulletproof, and showed a video of one getting shot, I would take extra precautions to ensure that it never came into contact with loose bullets, and to store it well away from ammunition or containers that previously held ammunition. That is how much I proactively mistrust ad content.
I didn't want to post yet another "reason #651235 why I loathe adtech" comment, but that's really it. Bots pretending to be people wouldn't be of interest at this level if it was just criminals trying to scam people - it would be just another type of crime. This is a problem because of the almost-fraud tactics of sales and marketing that, for some reason, happen to be on the right side of the legal line. I suggest we move that line to solve this problem.
2pt font on page 14: Except this page of exclusions, and conflicting exclusions, and the fact that the company can add any of its choosing at any time, without notice.
That copy should really be "for deception purposes only".
How about foreign agents trying to influence elections at scale, or inciting civil unrest?
Corporations can't go to jail. They only pay fines that are a fraction of their profits from the illegal behavior, which is a mathematical incentive for them to break the law before their competitors.
Even in Silicon Valley, we laud Airbnb, Uber, Facebook, Google, et al.
Do you want corporations to be better or not care?
Do something minor? 10% dilution. Do something worthy of the corporate death penalty? Issue 100 shares for every share outstanding.
Of course, the issued shares would carry as many votes as the maximum of any currently issued share (so if the founders get 10 votes per share, that is what the victims get too).
Note that you can set precedents based on percentage ownership, so this naturally scales to large (and small) companies.
If this were common practice, I guarantee you that companies would be a lot more careful to obey the law: the CEO, founders, and investors would all have a bigger personal financial incentive to obey the law than to break it.
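A back-of-the-envelope sketch of that arithmetic (Python; the numbers are the illustrative ones from above, and the function name is made up):

    # Hypothetical sketch: how punitive share issuance dilutes existing owners.
    def remaining_stake(new_shares_per_existing: float) -> float:
        """Fraction of the company existing holders keep after new shares
        are issued to the victims, per share outstanding."""
        return 1.0 / (1.0 + new_shares_per_existing)

    print(remaining_stake(0.1))    # minor offense, 10% issuance -> keep ~90.9%
    print(remaining_stake(100.0))  # "corporate death penalty" -> keep ~0.99%

So a 100:1 issuance hands roughly 99% of the company to the victims, which is the personal financial incentive the proposal is pointing at.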
Typically, the victims of corporate malfeasance are considered to be the (existing) shareholders, in the American system. Like, there's a lawsuit on behalf of the shareholders when pretty much anything goes wrong that affects the stock price. There's quite a contrast to the common opinion on HN and elsewhere, where the shareholders are identified with the corporate malefactors.
Among those: if lawyers get 30% of settlements, then large consumer-rights law firms could end up as major shareholders. That could get very interesting in several ways.
Airbnb isn't even public, so the shares are essentially worthless unless someone takes on the expense and hassle of pooling them together and finding a private buyer. And even then, there's a major imbalance of power because private buyers are usually wealthy individuals/funds and know that the seller is desperate for cash.
Someone does something bad at an F100 company? 10,000 employees supporting their families lose their jobs.
> There are a thousand hacking at the branches of evil to one who is striking at the root, and it may be that he who bestows the largest amount of time and money on the needy is doing the most by his mode of life to produce that misery which he strives in vain to relieve.
Successful tech founders are especially guilty of producing what they strive in vain to relieve.
I agree companies should be forced to tell the truth to their customers, but in all cases, even where they're asking for details about the customer service rep? That could go weird places.
If a caller demanded to know the rep's HIV status, we wouldn't insist the company disclose it. We'd probably demand they didn't. Honesty is important, ok, but the customer has no reasonable need to know that. So, ok, it's not as simple as "tell the customer everything." There are some judgment calls the company has to make about what's important to disclose and what is irrelevant. The service rep's race, religion, medical history, or sexual orientation are probably not up for discussion. Why not national origin?
This isn't just about fairness to the rep; the majority of customers need the company to hold this line too. If some portion of the population strongly believes that foreign call centers are lower quality, are you going to get the best possible answers on a survey if you say, "this was a foreign call center, how was the quality?"
Disclosing irrelevant details can sabotage collection of unbiased information on call quality.
If customers as a whole want improved service quality, they'll want the company to be able to collect unbiased post-call survey results.
If call centers in country X are all terrible, unbiased surveys will reveal that. If it's really about training, and there are good and bad call centers in several different countries, unbiased surveys will reveal that too.
I know passions run deep on this one, and it's a hard case, but this one seems a little more complicated than the others.
"Hello, it's [fake name they can't quite pronounce chosen to be a common UK name] calling you from [company they don't work for but who ordered a phone advertising service], it's lovely weather here in [small UK town they mispronounced badly and where it's terrible weather today], how are you today? ... Great, we are calling about your [account with company you don't have an account with] ..."
It's basically all lies, and all unnecessary, trying to win some trust as part of a marketing exercise. Bleurgh.
Because it has no bearing.
I think a more interesting question is if they wanted to know health information that would be relevant, such as specific mental abnormalities that would influence a person's ability to lie. Then again, even that question is moot because when you ask someone if they are a liar, the truth tellers and liars both answer no.
Current location matters because of laws. If the other individual is in the US, I have a better idea of what recourse I have if my information is misused.
And I don't think this can be dismissed outright as contrived and unreasonable, especially in some cases, like doing business in China, where IP is treated very differently.
But if we use the excuse of the customer having a right to know the laws that bind whom they are talking to, gender and race become an interesting case as well. In a fair and just world, they shouldn't matter one bit. But given our current world, we know that the legal system is not applied to them equally. Racial and gender disparities in sentences can be off by a factor of 27 (this was a back of the napkin estimate I did some years ago based on sentencing data, and was based on the same crime, so it does not include any disparity in what charges are brought). Does a customer have a right to know that the person they are talking to is privileged when it comes to laws being applied and thus has less disincentive to engage in illegal behavior?
>If some portion of the population strongly believes that foreign call centers are lower quality, are you going to get the best possible answers on a survey if you say, "this was a foreign call center, how was the quality?"
Self-reports are one of the worst forms of data collection, so it is a bit sad that so many companies still depend only on them when rating their employees. More objective metrics have massive problems with being gamed, but at least they are more objective.
You may or may not be aware but there are companies that actively instruct their international call center personnel to tell customers "My name is Sally and I live in Fort Worth Texas, right near you" when none of that is true.
This seems very far from the OP's thesis that in our culture there is a normalization of lying by people trying to sell us things.
National origin is different from where the call center is located. A company is under no legal or (in my opinion) moral obligation not to "discriminate" when it comes to choosing the country in which to locate its call center.
I don't think there is a "general fix". You just have to constantly show them where the boundaries are and the cost of crossing them.
Taking advantage of the expectation that a general fix is possible is what gets charlatans and panderers into power.
That said, even though it has become possible to exploit weaknesses in the population faster and at scales never seen before (imagine a shark that overnight grows more efficient at hunting, doubling its kill count and hunting grounds, and that trait spreading to all sharks by the next day), that same pace causes natural-born predators to bump into each other more, clashing more frequently and expending more resources/energy on empire defense.
There is no free lunch, even to the mindlessly ambitious douchebags of society. They put up an appearance that there is.
And that's a problem in this case, because it's not like any of the predators in the ad industry actually die. They just deliver less money to the parties that employed/contracted them.
Advertising has a different dynamic than regular predation, because in a saturated market, any party's effort serves only to cancel out the efforts of every other party. It's a zero-sum game that can consume a near-infinite amount of energy and resources. Now think of all the man-hours, electricity, fossil fuels, paper, paint, toxic chemicals, and human dignity - all wasted in a zero-sum game - and tell me again that advertisers "expending more resources/energy in empire defense" is a good thing. It's the opposite - it'll eat our economy and kill us through the side effects of all the resource wastage.
Hmm, I'm not saying that you unintentionally found a solution here.
(I mentioned the Nike thing not because this is limited to conservatives but because it was the most recent example that came to mind: there were a bunch of well-promoted tweets from accounts using profile pictures of attractive young women which a quick TinEye search showed were long-running Instagram users under different names.)
Free choice and all, but without disclosure (pre-purchase) a company shouldn't be able to "cost manage" their support function by using an algorithm fronted by chatbots.
(Once Verizon tried to charge me an ETF despite being a couple of years out of contract. The phone support was horrible and I thought it might have just been an English fluency issue but then I called the executive office and the person who fixed my bill just casually volunteered that the people in the first-tier call centers cannot in any way reverse charges or escalate to people who can, and aren't allowed to say that. They understood the question but their jobs were literally on the line if they told you that they couldn't help.)
As Google and PayPal have shown, it's generally a cluster$&#* (at best), even with state of the art technologies.
Furthermore, it fundamentally shifts the power dynamic by removing the possibility of whistleblowers and ethical counterpressure. You literally have management as the sole arbiter of algorithmic settings, with (currently) no legal disclosure requirements as to any internals.
That seems pretty screwed up from a free-market, transparency perspective.
Kidding, of course. Mostly.
Disclosing IVRs before purchase would be a good start. Preferably with a legally mandated recognizable logo that customers can quickly learn to associate with terrible support.
Norms and regulation/law can work together. Either alone is insufficient, I think.
Strong international norms and regulations are probably needed as well. As we clean up our domestic affairs, we create more space for hostile state actors to fill.
This is a very important observation. The world most of us want is one influenced by norms we work together to define & change over time. Some see this as an opportunity to exploit for personal gain and we eventually get imperfect regulation that tries to reflect what we originally intended, or worse, is enacted by the same bad actors to coerce the bad behaviours we resisted in the first place.
I see this all the time in growing companies:
1. culture dictates expected behaviour
2. company grows
3. culture weakens
4. norms get violated (intentionally or not)
5. process gets decreed to address transgressions
6. everyone loses the shared benefit/responsibility of autonomy
7. GOTO 1
How would you fix, at a general level, the overall human & corporate tendency to play loose with the truth? I mean, we're subtly dishonest all the time.
To avoid capture, require all watchdog individuals to be masked / anonymized, with prosecution flowing through the DoJ.
Bots? Just a manifestation of the above.
Could corporations get around the restrictions you have in mind by subcontracting speech to people?
Minor PSA: If you ever immerse your IP68-rated device in anything other than water, make sure to give it a rinse in water ASAP. Preferably not with high-pressure water, either.
Having anon accounts is good for HN, but as soon as money is involved we need a structure that establishes identity for all the people inside the transaction.
Technology would have to reach 1984 levels to ID users at all times. Better to have an identity relationship with a structure or body that you can meet with in real life, like a bank. A mix of tech and human relations is required for a sane identity relationship with this governing body, and therefore with everybody you do business with on the net.
Lying goes back to manageable levels once the identity is linked to someone's real life reputation.
This is a minuscule experiment I am running: https://gitlab.com/simonebrunozzi/dark-companies
I have a friend who worked at a call centre located in Asia for an American company. They were threatened with termination if customers heard them speaking in their native language, and the training for the job consisted mostly of faking an American way of speaking as much as possible.
I know the normal reaction is more fines and regulation, but then you're using bureaucracy and court cases to fight people who are masters of bureaucracy and court cases.
Needless to say I have spent a couple of minutes repeatedly asking a question, and even rephrasing it while being frustrated that this person does not seem to grasp my issue.
No matter how many people know better, there are plenty more that don't.
To be fair though, I'm annoyed by human first-level support for similar reasons most of the time, but at least escalation/problem solving with them is easier than with a bot that explodes when you go off-script. Personally, I think that making me talk to your fancy answering machine is just devaluing. If a company wants my business they should have human support and pay them enough to work with the customer.
This assumes that people who have a question they need answering will consider searching for an answer or getting to grips with the UI first, rather than heading straight to customer support. That's often not the case. Long before support chatbots became widespread, telephone trees made a point of telling every single user in the queue that the help section of their website existed...
Agree bots should often make it easier to escalate and should default to escalation rather than blowing up, but if a large proportion of your queries are actually answered by telling people the FAQ page is a thing, it's a bit difficult to justify paying humans to do it.
If escalation takes a couple of minutes for a response, escalation and blowing up might be indistinguishable...
What portion of the chat pop up windows do you think are purely bot? Might any of them be purely human?
1) The bot initiates the conversation and your initial message gets sent directly to an agent. This is the older model that's in most common use today.
2) The bot initiates contact and, based on your responses, does some simple keyword matching and delivers help-article links where possible or asks for more information IVR-style, then, when it hits an "I don't know" point or if the agent option is selected, offloads to an agent (see the sketch after this list).
3) This is my favorite style, honestly: The bot initiates the interaction, and does some machine learning backed AI chat, all the while the interaction is monitored by an agent who can take over at any time. Similar to #2, if the bot hits a sticking point, it'll just queue to an agent. This unfortunately is the least common of the implementations.
4) This is the most modern and is becoming the industry leader: Fully AI bot trained against a veritable Everest of chat conversations for that entity/industry, only offloads to a human when you shout "HUMAN" at it enough times or if it gets really stuck and confidence intervals start falling rapidly.
NOTE/DISCLAIMER: I design and implement these systems for a living, and we don't often get much say in the customer-side UX, so I'm sorry if you've gotten stuck with an arguably bad build!
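For the curious, here's a minimal sketch of what the style-#2 flow might look like (Python; the keywords, URLs, and function names are all made up for illustration, not any vendor's actual implementation):

    # Toy style-#2 flow: keyword matching, help-article links, and an
    # "I don't know" fallback that offloads to a human agent.
    HELP_ARTICLES = {
        "password": "https://example.com/help/reset-password",
        "refund": "https://example.com/help/refunds",
        "shipping": "https://example.com/help/shipping",
    }

    def queue_to_agent(text: str) -> str:
        # In a real system this would enqueue the transcript for a live agent.
        return "Let me connect you with an agent..."

    def handle_message(text: str) -> str:
        lowered = text.lower()
        if "agent" in lowered or "human" in lowered:
            return queue_to_agent(text)           # explicit agent option
        for keyword, url in HELP_ARTICLES.items():
            if keyword in lowered:
                return f"This article may help: {url}"
        return queue_to_agent(text)               # the "I don't know" point

    print(handle_message("I forgot my password"))
    print(handle_message("my order arrived broken"))  # no keyword -> agent

The real systems are fancier (scored matching, context carry-over), but the escalate-on-miss shape is the same.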
Nobody will notice, you know you want to add it ;).
That's my assumption and if I'm not in a rush, pretty ok with it.
But yes, some of them are contact boxes for real people.
Or, as you did, just say something that puts them off script.
I don't know about that. Does Amazon's support use bots? Or are they just in a remote call center using auto translate to communicate with customers?
Because in general, they appear like bots to me: they parse for keywords and reply to those, ignoring context and subtle differences, which makes their replies sound weird.
Depends on the section of Amazon. AWS enterprise support does not.
If individual states each enact individual laws governing the internet, then only large companies will have the resources to follow them.
We'll see a balkanization of the web wherein it's no longer very world wide. Small internet businesses will become harder and harder to start. Big monopolies will become entrenched.
It's not pretty.
It's small-business webapps that have problems. But I would be very glad for those to stop doing so many terribly sketchy things that have become the norm on this consumer-unfriendly internet we're dealing with.
So, like, I'm super afraid of a "regulated internet", but I'm also super sad about what's happened while it's been unregulated. Businesses, governments, even ICANN have done terrible things to the internet. I'm not very optimistic about any outcome anymore.
The law is already so complex that even within a single country or state, people who study it need specialisations. Regulators have no grasp of existing laws or of the full extent of the laws they are voting for. It's madness.
A small business that the government or a big company doesn't like can already be killed by a thousand cuts, because it doesn't have an infinite amount of money to spend on lawyers.
What you refer to as “analytics” is actually stalking and I’m glad we have regulation around it now.
By the way, you don’t need to display a warning, you need to ask for consent in a non-intrusive way (consent should be freely given).
Or just you know, don’t stalk people. Seems like an easy enough solution.
Article 2 of GDPR defines the scope of the law.
> This Regulation does not apply to the processing of personal data:
> by a natural person in the course of a purely personal or household activity
Re: the analytics part from another comment - I only care how visitors interact with my website, without sharing that info with anybody. You could avoid cookies if you really wanted to, but then you'd actually collect more data about your visitors.
A decade from now, the state might target certain sites with forums that say things it doesn't approve of, and use these laws to do so. Is giving the state more tools to circumvent freedoms like freedom of speech on technicalities a good thing? How about, instead of making a law that blankets many benign interactions and relies on the enforcers' discretion to go after only the actual problems, we try to target the actual problem behavior as tightly as possible, and then identify the edges where problem behavior remains and pass new legislation to cover those?
Apart from that, an internet forum doesn't seem like a ridiculously obscure edge case.
Not having both of those is a good thing, so I see no downside.
And as long as state laws are limiting abusive behaviors, I don't see what the problem is -- then we all follow the new "floor" and the whole country benefits. A small start-up can choose not to lie about a bot being a person just as easily as a tech giant can.
The problem is if local laws conflict with each other or impose a significant burden. But then the federal government generally steps in quickly and shuts things down thanks to the interstate commerce clause.
So I don't think there's anything to worry about here.
I accept the general premise of your point, but this particular law isn't asking anything onerous. The societal benefits that come from limiting the ways in which political and commercial bots can deceive the public are just too great to claim this is a bad thing.
2) Commerce and communication restrictions should not be considered without also considering the purpose and benefit the legislation intends to cause. First and foremost is whether it actually helps more than it harms. For example, it could have been (and I'm sure was) argued that abolishing slavery was problematic from the standpoint of groups of people and businesses interacting with those states and what it meant to use slaves in them or take slaves to them. The point is not to equate this issue with the abolitionist movement, but to point out that how the law affects interactions between states may have very little (or very great) bearing on whether it should be ratified, and it depends entirely on the legislation in question. Great leaps in both positive and negative directions both cause friction between (nation)states, so friction itself is a poor indicator of whether legislation is good or bad.
The "laboratory of the states" thing is a legal nightmare in a networked world, but it's inevitable under the current organization. Well-crafted nation rules would be better, but a lot of people seem to want it this way.
For generations people who wanted to do business in another jurisdiction had to follow the law of that place, even if it meant physically traveling there or opening a store.
Internet businesses would still have things much easier, as a change in software can be done by someone working from home or in Chennai or what have you, without any capital expenditure.
Don't make this out to be Armageddon - it's not.
Or, taking the European GDPR regulations as an example, actually caring a little bit more about the user's data and enabling informed consent.
Lawmakers are purposefully vague because judges can decipher what the spirit of the law is and fine corporations or condone specific use cases when they are brought up in court. You can't go into court to challenge a law with hypothetical cases for a good reason. Do you want lawmakers to arbitrarily impose constraints like only 10% of non article words can be suggested per message composition or only 2 posts per minute are allowed?
It is a fact in life that technology changes and improves things beyond what we could have foreseen in just a few years. The degree of flexibility built into these laws is a huge plus. Not a flaw.
It's a work related disease. A coder must consider all corner cases in advance. There is no judge to decipher the spirit of a program.
Me: "So, what should the program do when XYZ occurs?"
Marketing: "Uhm... Dunno, haven't thought about it. I'd decide by, you know, gut instinct. We haven't thought about that yet."
Me: implements a virtual coin-flip using Random.Next()
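Something like this, as a sketch (the comment's Random.Next() is C#; here's a Python equivalent, and the case name is made up):

    import random

    def handle_xyz() -> str:
        # Marketing had no opinion, so the spec is literally a coin flip.
        return random.choice(["variant_a", "variant_b"])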
Once you've done that, it's easier.
Chances are, if the requirements person doesn't have an opinion on what the copy for the dialog should be if the customer is 65+ and it's a Tuesday in a month with 31 days, then it's because that choice doesn't really matter all that much.
Vagueness in law isn't a good thing. It leads to people randomly losing everything, even if they made a good faith attempt to follow the "spirit" of the law (whatever that actually is), for no better reason than lazy or incompetent law making. After all, lawmakers can easily update or change laws to reflect changing circumstances - they just prefer not to because regulating entirely new areas of life makes them feel better than the relatively boring work of updating existing laws.
Society changes faster than representatives can legislate.
That leads to the judicial branch actually making the law, and to an inconsistent set of rules depending on who the judge is.
The problem is then nobody actually knows what the law is until after the judge decides it, at which point they're essentially creating new rules ex post facto and applying them to past conduct. It's manifestly unreasonable to apply a rule that wasn't known until five minutes ago to actions that took place last year.
> Trying to enumerate every legal interpretation and eventuality based on today's conditions and technology results in a law that won't be meaningful 5 years from now.
Which means you may have to pass a new law in five years -- that's not a bug. For that matter, if you expect things to change significantly then you may want to make the current rules expire in five years automatically, or hold off legislating anything at all until you see how things shake out on their own.
Do you want to build a business based on the whims of a judge when you thought that you were following the law?
...in which Matt Levine juxtaposes the concepts of going by rules versus what he calls "legal realism".
No, precedent is foundational to common law, not western law as a whole. There are two main forms of western law.
I also think it's an issue, as this law is set up to start a cat-and-mouse game, where precedents are slowly established while bad-faith actors find other workarounds and run with them until new rulings are set, then rinse and repeat.
When it comes to spam or ads, iterating workarounds is faster than bringing cases to court, so the traditional approach is problematic.
Because many people here are programmers, and finding edge cases and how to deal with them is often a significant portion of the job.
I assume it's also a large part of many lawyers' jobs as well.
2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.
The judicial system will not specify in writing complete coverage for every loophole. Judges can, regardless, find you guilty.
Unless you were disabled and using assistive technology.
It is when the substitution is both context-aware and not what you intended to write.
> 2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.
Wait, so if someone sends you a question and the suggestions can detect from the context your answer, you're a bot because you chose the suggestion instead of typing out words with the same meaning? Then aren't most people texting going to have to declare themselves bots?
> The judicial system will not specify in writing complete coverage for every loophole. Judges can, regardless, find you guilty.
"Judges will decide something" is no help to you when you're trying to predict what they will decide ahead of time. Finding out after the fact does a fat lot of good after you've already engaged in the behavior in question and an unfavorable ruling puts you in jail.
The law clearly targets automated content creation that is not declared as such, not assistive writing technologies, and this will be considered by the judicial system when evaluating your stated intentions and actual actions. If you are unable to predict with confidence the outcome of your intentions and actions as they may be interpreted by the judicial system, please seek legal counsel for further guidance.
How are those two different things? In each case it's a machine generating and suggesting things that you may want to write. Presumably in the second case the suggestions would have to be more sophisticated in order to be coherent most of the time, but that still doesn't really give you any useful criteria to distinguish them. We're already at the point that phones have context-aware word suggestions. There isn't really a principled line to draw there at the point where the suggestions get good enough to constitute the entire message. It already happens sometimes.
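To make the continuum concrete, here's a toy next-word suggester (Python; the training text is made up). The same mechanism, scaled up, completes a word, a sentence, or an entire message:

    from collections import Counter, defaultdict

    corpus = "see you soon . see you tomorrow . talk to you soon".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def suggest(prev_word: str, k: int = 3) -> list[str]:
        """Top-k continuations given the previous word."""
        return [w for w, _ in bigrams[prev_word].most_common(k)]

    print(suggest("you"))  # e.g. ['soon', 'tomorrow'] - accept enough of
                           # these and the machine wrote the whole message

Nothing in that code marks a point where the output stops being "your" writing.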
Do you intend to prepare your thoughts as written word, and you use technology to write those thoughts rapidly? Then that’s probably fine.
Do you intend to prepare written works written by algorithm, software, or technology, to a degree that the work can no longer be reasonably considered the creative output of a tool-assisted human and is now instead the creative output of a human-assisted tool? Then that’s probably not fine.
If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (CC-AT) when their copyrighted works are republished by humans. Copyright law has significant experience studying the problems of entangled and commingled ownership of works, but it’s too soon for US society to grant copyright to algorithms over their works, and so this law is all we get today.
> If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (CC-AT) when their copyrighted works are republished by humans.
That's just restating the question, not answering it. And the hairy mess used for copyright is not a very promising thing to aspire to.
That will probably be distinguished by a judge looking at all the facts that apply to a specific case and making a decision. Details like these are the reason why there's a justice system with actual humans in it, and not just some software bot calling shots by following if-then-else statements written in law documents.
They are of different colour. Sounds like the law is aiming at that distinction.
Whether or not a piece of computer-generated content was "automated content" vs. "assistive writing" might entirely depend on the answer to the question "why was this piece of writing created?".
 - https://ansuz.sooke.bc.ca/entry/23
Believing so is a common misconception amongst engineers, but depending on it as such is likely to lead to disappointment, frustration, anger, needless bickering, extended conflict, and vexatiously long, hard to read, and mostly unenforceable contracts.
I mean, people have tried! Ethereum created a system of contracts implemented in a programming language. Know what it led to? People losing huge amounts of money after someone found a bug in a contract and exploited it. And after that, the money was gone. The hacker had followed the contract as written, and the money was theirs now.
Ultimately, the only ways that situation doesn't play out are if the system is designed perfectly not just for current use but for all future uses, or if humans are removed entirely from the equation. Since the former is impossible, and the latter means the system is either irrelevant or we're all dead and gone, we might as well accept human intervention as inevitable.
Above all, if the law were code, who would decide the input? Unless every conversation and record is already in the Law-Bot, huge power is given to the "formatting" of the evidence.
Ignoring predictable ambiguities to be resolved by the subjective whims of the judiciary is not the rule of law, and the fact that it regularly happens doesn't change that or make it right.
Loopholes are just zero-day exploits of the legal system.
Making the law clear and correct is the only way to prevent the ambiguities from being construed in favor of whoever has the most money to spend litigating it.
Probably, for online, similarly one post would be allowed per individual human approval, unless you add the bot disclaimer.
If I made custom keyboards with just a few choices each, I think I could probably handle half a dozen without any problems.
And if I'm literally just always telling the bot "okay, go" with 1 button, I think I could handle dozens.
Who would those bots be? wccrawford1, wccrawford2, wccrawford3? Those would not pretend to be anything other than bots or clone accounts.
If, on the other hand, your bots were "Brock Samson", "Joey America", "Chip Dipsby", etc., that would clearly be non-humans pretending to be humans, regardless of you tripping the first/last domino piece.
If I had live people answering, but their names were randomized, would that not be okay?
I honestly don't see any real difference between naming the bots like that. Yes, some people will be less likely to think they're bots if they're all named differently. But the vast majority of people will not notice if they get contacted by several different employees/bots all named Brock Samson. Most people won't even talk to more than 1 employee in a short timeframe to even have a chance to know there was a difference.
Actually, he is just asking what the exact definition of a bot is (something every scientist should ask in this context).
If I launch a new tab in the background and tell it to go establish some set of factors for me, or locate price points and details for me, or buy something for me (and right now, as me)... or just have it let me browse and interactively direct it, but have it block ads as I go.
I know the law, and lawmakers, are looking at this from a fraudulent-content perspective, but they are going to be hard-pressed to do anything in the long run to quell this.
I doubt that anyone running bots, and who is technically competent, will be identifiable or findable. I mean, I could do it, and I'm just a random anonymous coward.
> I doubt that anyone running bots, and who is technically competent, will be identifiable or findable.
Telemarketing, for example, was done a great deal by perfectly legal, traceable businesses. Once it was made illegal, it was forced underground, and volume dropped immensely.
> I could do it, and I'm just a random anonymous coward.
Could you? Hiding the flow of significant amounts of money is actually quite hard. Robot salesmen masquerading as humans would be a plague, and I think this law should keep that from becoming a legitimate business technique.
It's true that getting assets out of cryptocurrencies is hard. I don't do it. I just spend my ~anonymous income on ~anonymous servers to play with.
But if you're moving enough assets, you can pay people, who know what they're doing, to move it. As we've seen in real estate markets in many cities.
- Back when SR1 was starting, he got a visit from the FBI about fake IDs that he had ordered, shipped to his actual address in SF. And he basically admitted that he had bought them from SR1.
- He posted to at least two sites about SR1, using accounts linked to his real name.
- Logs on an SR1 server pointed to an IPv4 address that he used in SF.
- Apache on an SR1 server was misconfigured, such that errors were accessible via the clearnet, instead of via the Tor onion.
- He worked in public with an FDE laptop, which contained everything about SR1, including IDs for all staff. And he didn't take steps to enable emergency shutdown.
And about Bitcoin. One of my favorite mixing services, Bitcoin Fog, has handled huge amounts of Bitcoin, from various thefts. And nothing has ever been traced, to my knowledge.
Finally, where do you get "businesses trying to be legitimate"? Maybe their customers are legitimate, but why would you say that about the bot providers? They could just have a credible cover operation.
As to the other bit: Bot providers need to be hired by somebody. If those people are legitimate businesses selling products or services, they will be traceable in the usual ways. It's those legitimate businesses that are of primary concern to lawmakers in this legislation and in most business regulation.
I've also often thought that the cheap bots are easily identifiable (tweeting every 3 minutes, 24/7/365) and that a better bot shouldn't be that hard to build. But then again, I've pegged somebody as a bot who actually wasn't; he was just very invested in the topic and had plenty of time on his hands.
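A crude sketch of that cadence test (Python; the thresholds are my own guesses, not anything a platform actually uses):

    from statistics import mean, pstdev

    def looks_like_cheap_bot(timestamps: list[float]) -> bool:
        """Flag accounts posting with metronomic regularity, around the clock.

        timestamps: Unix times of recent posts, sorted ascending.
        """
        if len(timestamps) < 20:
            return False  # not enough data to judge
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        avg = mean(gaps)
        # "Tweeting every 3 minutes, 24/7/365": short, nearly constant gaps.
        return avg < 300 and pstdev(gaps) < 0.2 * avg

A slightly better bot just adds jitter and sleeps at night, which is the point about how low the bar is - and a very invested human can still trip this heuristic.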
Google's Duplex is an example of this, and I agree with the law that its use should be disclosed.
The law doesn't have to lead to prosecution of every little criminal. If it polices some of the large players who can't easily hide what they're doing, it'll be a helpful law.
Also, "breaking the law" is an ambiguous thing. I mean, that's one of my favorite Judas Priest cuts, and he was talking about breaking laws against homosexuality. Not that most head-bangers realized it, at the time.
Also, just about every US media company breaks Saudi laws against sinful use of sexual images. And nobody seems to worry much about it.
I'm excited to see legislation in this direction but I wish they'd focus on forcing Twitter, Facebook, etc. (which /are/ in California and can be governed) to display / disclose when they are aware a user is likely a bot, and employ some half-decent detection methods.
In practice, there's still a reason companies avoid breaking laws most of the time.
For instance, ARGs could have bot accounts for fictional characters on social media sites. These accounts could give pre-recorded messages that then hint that the user should visit some third-party site for more clues or information. Is that legally dubious? I can see it being so under this law, but I don't think it's comparable to a business running, say, an automated chat support system and pretending its bots are human.
Same goes for roleplaying bots on online community sites. These aren't a huge thing right now, but they could be in the future, with accounts that act like NPCs do in video games or interact with players' accounts in side quests or whatnot. These don't seem like morally 'wrong' things to have on a site, but they'd probably get hit by this law regardless.
Point is, these types of bots don't necessarily only have dodgy use cases.
Unfortunately for the game (and the world), 9/11 happened a few months after launch and due to the theme of the game it was shut down. Now it's just an interesting bit of gaming history!
This time we've had to introduce some changes to abide by new state legislation.
Messages entered into the chat console must be followed immediately by the string " [I am a bot]", whether you are a bot or a human, but especially if you are a human.
Good luck and have fun!
These folks are essentially like bots, insofar as they are "programmed" to respond and significantly constrained in their latitude. They're like human bots, no?
I'd argue yes, but where they may have an argument is if I respond "no, I am human" when people ask me about being a bot: that's intentionally misleading.
They may have more luck here in the commercial space, where they can better regulate and enforce these rules, like advertising and other sales practices. Not sure where this goes in politics or other domains in terms of enforceability.
Human beings have human rights to express themselves however they wish.
No. Commercial speech usually has disclosure requirements, even in the US (see the Zauderer case), which humans have to follow. This is just another case of compelled commercial speech.
And it even uses a human name. Really dishonest.
It's prohibited from trying to sell you something or influence your vote, but Eliza doesn't do either of those things...
Unless companies start asking all human employees to start claiming that they're bots so as to subvert the new rule... there's not a law against that yet.