Governor vows criminal prosecution of reporter who found flaw in state website (missouriindependent.com)
1300 points by davidw 53 days ago | hide | past | favorite | 678 comments



After the Affordable Care Act went into effect I signed our company up for our state's marketplace. While browsing our plan options, I noticed the url used a scheme like marketplace.org/employers/341/plans.aspx. Of course, I tried changing the number in the url to 342 to see what happened. To my astonishment, it loaded up the next company's plans, including a list of employee names, ages, plan cost, and SSNs.

After I shopped a few other companies to see how our plans compared, I notified the marketplace operator via the only link on the website for customer service. Within about an hour, someone from their IT department rang me on the phone and started grilling me about how many other plans I browsed, and insisted that I clear my cache and browsing history, and notified me that they would be watching to make sure nobody at our IP address accessed any other plans while the issue was being fixed.

I was pretty surprised at his response, and assumed they would be more grateful for exposing a pretty basic flaw, but I guess a natural human tendency in these situations is to try to externalize the blame. Perhaps it's more difficult to hold yourself accountable than it is to assume that others who've found your shoddy work are malicious actors.
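For reference, the pattern described here, changing an identifier in a URL to reach another tenant's records, is usually called an insecure direct object reference (IDOR). A minimal sketch of the missing check, in hypothetical Python handlers (the names and data are illustrative, not the actual marketplace code):

```python
# Hypothetical sketch of an IDOR and its fix; names and data are made up.

PLANS = {341: "Acme Co plans", 342: "Other Co plans"}
EMPLOYER_OF_USER = {"alice": 341}  # which employer each login belongs to

def get_plans_vulnerable(user, employer_id):
    # Flawed: trusts the ID taken from the URL with no ownership check,
    # so any logged-in user can enumerate every employer's plans.
    return PLANS[employer_id]

def get_plans_fixed(user, employer_id):
    # Fixed: verify the requested record actually belongs to the requester
    # before returning anything.
    if EMPLOYER_OF_USER.get(user) != employer_id:
        raise PermissionError("not your employer")
    return PLANS[employer_id]
```

The fix is a single server-side comparison; nothing about the URL scheme itself needs to change.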


Unfortunately, this is the top comment and it has led to a lengthy discussion about the ethics of altering a URL to retrieve a resource you should not have access to.

Which is a fascinating discussion, but it has nothing to do with the case at hand, in which the underlying HTML on a publicly accessible search result page contained the SSNs of the teachers returned in the search.

All the analogies about ‘it’s like asking the IRS for another document’ are all wonderfully applicable to this comment, but not remotely applicable to the actual article.


This entire thread is a great microcosm of how difficult it actually is to talk precisely and intelligibly about "hacking", permissions, intended access, etc!


On the other hand, to a non-techie person, where do you draw the line? Accessing the HTML of a public webpage is trivial to you and me. But what about decompiling or extracting strings from an .apk? Almost exactly the same thing as pressing F12 in the browser, but a tad more 'active'. It is relevant to this article, as it asks what hacking is OK and what isn't.


I wouldn't call any of your examples hacking, at least not in the "hacking other people's systems" sense. It's accessing information the other party actively sent to users. The fault lies with the one distributing and potentially exposing sensitive information through negligence.


If someone distributed a .apk that contained plaintext SSNs of their employees, I don't think I would call someone who noticed that a 'hacker'.


I would say that finding and reporting exploits is ok in every circumstance. I would draw the line at using those exploits for malicious purposes.


Yes, it looks like it was built to search educator SSNs[1], so the devs just... put them all in the js. How's that for caching? Ouch.

1: https://web.archive.org/web/20210428154433/https://apps.dese...
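The pattern being described amounts to the server embedding the full records, SSNs included, in the page it sends to every visitor, with only the display layer hiding them. A toy sketch of the difference (hypothetical field names, not the actual DESE code):

```python
import json

records = [{"name": "J. Doe", "cert_id": "12345", "ssn": "123-45-6789"}]

def render_vulnerable(recs):
    # Flawed: ships every field to the browser. "Hiding" the SSN is left
    # to client-side display logic, but it's right there in View Source.
    return "<script>var results = %s;</script>" % json.dumps(recs)

def render_fixed(recs):
    # Fixed: strip sensitive fields server-side, so they never leave
    # the server at all.
    safe = [{k: v for k, v in r.items() if k != "ssn"} for r in recs]
    return "<script>var results = %s;</script>" % json.dumps(safe)
```

No F12 required in the vulnerable case: the browser already downloaded the sensitive data before rendering anything.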


I am now questioning the wisdom of having shared this story, and I apologize for derailing the discussion.


It’s a relevant comment, and people evidently found it interesting.


Yours is an interesting story. And very relevant. It just isn’t applicable to one interesting aspect of the article being discussed, which is that the sensitive data was sent to every user but was “hidden” by the HTML.

But the shoot-the-messenger aspect of reporting vulnerabilities is also very relevant. It’s just the nature of forums like this that some things bubble up to the top and dominate the discussion. Hard to say it’s your fault for retelling a story.


It's a good story and relevant. Not your fault the internet spun it off in a totally different direction.


fwiw, I found it relevant. It's obviously not _exactly_ the same thing, but "sending an HTTP GET request to a URL" is similar to "viewing HTML source" in that both are totally normal things to expect from a user, so it's hard to see how either could count as "hacking".


URLs are not secrets. End of discussion.


End of a different discussion than the one this news article warrants.


I found a similar vulnerability in one of our vendors' online order systems. After placing an order, I noticed an integer in the order confirmation page URL. I reduced it by one and refreshed the page. Sure enough, I got all the order details of the previous customer's sale. Reducing _that_ URL by one got the next previous sale's details, etc. I notified the company about it. They fixed it, and in gratitude sent me a small package containing a pen and other office kitsch branded with their logo. Not much of a bug bounty, but the pen has proven useful.


I let a company know that the url for their receipts (including name, address etc) was simply an md5 of the order number. They graciously offered 15% off on my next order as a thank you.


I feel like that would be a decent option for a surrogate key for public identification of an item and potentially cheaper than generating a uuid or something else. Maybe combine that with a salt and you have alright protection. How did you figure out that it was an md5 of the order number?


Presumably order numbers are easily guessable, so the md5 really offers no protection at all in this case and is no better than just using the order number.


And the thing is, even if they can't be guessed, it's only 999,999 calls to try every 6-digit possibility. And you'd only take 11 days if you were nice and paced yourself to 1 req/sec.
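Since the input space is so small, recovering the order number from its MD5 is trivial, and even faster offline. A sketch of the enumeration (the six-digit, zero-padded format is an assumption taken from the comment above, not something known about the actual site):

```python
import hashlib

def crack_order_hash(target_hex, digits=6):
    # Try every possible order number. MD5 of a small, guessable input
    # offers essentially no secrecy: the whole space can be enumerated.
    for n in range(10 ** digits):
        candidate = str(n).zfill(digits)
        if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None
```

Run offline, the full million hashes take seconds on a laptop; the 11-day figure only applies if each guess requires a request to the server. A per-order random token, or an HMAC keyed with a server-side secret rather than a bare salt, avoids the problem entirely.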


Searching for that MD5 would probably be sufficient to find that out.


I think the main difference is the one between acknowledgment, action + (small) gratitude vs. fear, paralysis and scare tactics / trying to control the environment instead of fixing the issue.


I notified the State of Ohio about the unemployment site displaying full blown debug information in error messages (it genuinely errored out on me while doing legit stuff). The amount of information was very interesting and detailed, basically begging a malicious actor to probe further. I sent screenshots and a detailed writeup about what my next moves would be if I were a "hacker" straight to the CISO/CTO and their boss (my info is in that system!). No response...thankfully.


I did that once a long, long time ago with the organization that monitors maritime piracy around the world. I accidentally stumbled on a mailing list of theirs containing, I assume, thousands of email addresses; I only saw the one page of them, but the addresses ended in top-level domains like un.org and navy.mil. I contacted the people running the organization through email to say that I had accidentally stumbled on the page and that they should probably hide it, to which they responded thank you. If you have ever been to Washington DC, you would know the amount of money military contractors spend to show the latest navy vessel to everyone at the Foggy Bottom metro station and other places where such ads seem unlikely. That was the mother of all B2B email lists for militaries and shipping companies around the world. I didn't want to play any games with it.

EDIT: Remembering it now, there were also email addresses with the Iranian navy as they coordinate with other navies to fight piracy too. Perhaps instead of sending a Rickroll I could have sent a mass email with Lennon's "Give Peace a Chance."


Huge missed opportunity for mass email of URL shortener link to the youtube Rick Astley video.


There were cia.gov email addresses in there too. When these guys don't get a joke and fixate on you, they really fixate on you. They are more clingy than that song.


Are you saying that the CIA is never going to give him up?


Well, they're never gonna let him go, that's for sure.

And they may also hurt him.


well they're definitely never gonna say goodbye


Redacted.


I think you just missed the entire point of the comment you replied to...


I think you just missed the joke of the comment you replied to...


> ... someone from their IT department rang me on the phone and started grilling me about how many other plans I browsed, and insisted that I clear my cache and browsing history, and notified me that they would be watching to make sure nobody at our IP address accessed any other plans while the issue was being fixed.

An IT employee who doesn't know about VPNs. Sigh.


Maybe he was hoping OP didn't know about VPNs; it's not an uncommon scare tactic to imply that being tracked is unavoidable.


I'm sure any further unauthorized access from random VPN IPs would have also been blamed on OP, unfortunately. "He found this out then an hour later random IPs exploited it. He must have initiated those VPNs".


VPN doesn’t matter here. OP made it clear he was logged into the system first. Presumably all data is blocked until you are logged in. And if you are logged in, IT admin does not care about your IP address when they have your username.


Unless the IT guy was accidentally letting it slip that there was no authorization implemented at all.


Which, in context, is very, very likely.


Would be ironic since OP caught an exploit that their entire team wasn't smart enough to catch... yet somehow he wouldn't know about something as basic as VPNs?


Zero chance this was an issue of an entire team not being smart enough to check - everyone who touched this would immediately understand it wasn't in the authenticated flow. This smells like bad requirements being delivered to the implementers.


Or phone hotspots. Or cafes. Or home internet. Or open wifi. Or language translation websites. Or proxies. Or a dozen other ways that do not require a VPN.


It is very easy for IT managers to put the blame on "hackers" intruding into the network, instead of assuming they created an insecure system. In many companies this can work.


Years ago, I worked at a place that tried to install some new core routers. The first core router worked fine, but connect the second and the whole campus network would go into meltdown.

The network team could not work it out. The vendor could not work it out. But one of the IT managers had an explanation: me. Firstly, it was due to an OpenVPN instance I installed on a server (with permission, as a stopgap measure so we could remotely access the “next-gen data centre”, because the networking team was taking too long to get the real VPN installed and it was blocking other teams on the project). The explanation didn’t make any technical sense: the VPN is just an application, nothing to do with the core routers; but he wasn’t technical enough to understand that. They told me to shut it down, so I did (even though doing so inconvenienced the project), and lo and behold, it made zero difference to the problem. Then, he apparently even suggested at a management meeting (I wasn’t there but I heard about it) that I was sneaking into the data centre at night or on the weekends to sabotage things, and that was why the new routers didn’t work. Apparently they even asked campus security for my physical access logs, which revealed I hadn’t been doing any such thing.

Eventually, the vendor worked out the problem. When you install the router, there was a step where you had to change the VRRP IDs to give every router a unique ID on the network. Clearly explained in the documentation, obviously essential; apparently our networking team didn’t read that part. You plug one new router in, everything is fine; plug the second one in, and it still has the OOTB default VRRP ID, so now two core routers on the campus network have the same VRRP ID, all the other routers get confused, and the whole thing falls apart. Both our networking team and the vendor’s support team were so focused on chasing some obscure bug that they didn’t see the basic config issue.
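The failure mode described, two routers left on the same out-of-the-box VRRP virtual router ID, is exactly the kind of thing a pre-deployment sanity check catches. A toy sketch of such a check (the router names, the dict layout, and a factory default VRID of 1 are assumptions for illustration; check your vendor's documentation for the real defaults):

```python
from collections import Counter

def find_duplicate_vrids(routers):
    # Each VRRP group on a shared network segment needs a unique virtual
    # router ID; duplicates make routers contend for the same virtual
    # MAC/IP, which can take down the whole segment.
    counts = Counter(r["vrid"] for r in routers)
    return sorted(vrid for vrid, n in counts.items() if n > 1)

routers = [
    {"name": "core1", "vrid": 1},  # left at the (assumed) factory default
    {"name": "core2", "vrid": 1},  # also left at the default: conflict
]
```

Running such a check against planned configs before plugging in the second router would have flagged the conflict immediately, instead of after weeks of chasing phantom saboteurs.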


Wow, I'll keep this in mind next time I complain about my manager.


r/talesfromtechsupport is full of these sorts of stories. I make a visit on days when my job is frustrating and inevitably feel better.


Did that IT manager ever apologize for accusing you of being the problem?


I don’t remember him ever directly apologising, although he was nice to me afterwards (and this was many years ago, memories get hazy). I think he was rather embarrassed by the whole incident; it turned out to be such a basic configuration issue, and it took them so long to solve it. I only knew about the whole “sneaking in at night” allegation because my boss told me what he’d said at meetings to which I wasn’t invited, and I don’t think my boss was supposed to tell me what was said in those meetings, so I’m not even sure if he knew that I knew he’d accused me.


Lots of these folks (like the governor) don’t even know the basics of IT. Zero knowledge. You can tell them anything and it will stick.


All hacks are "sophisticated" because otherwise the other party would be "dumb".


Hey, the only people we have to convince that it's not hacking are the insurance companies. When they start charging their clients for their absurd levels of risk and liability, we'll start to see actual change.


I very much want the blame to be on the person who broke into my house regardless of whether my door was locked or my window was open.


Which works great when there's some kind of access restriction in place.

If you wind up putting your tax returns in the 'little free library' you set up on your front yard, you can't blame others for reading them, then handing them back to you and not telling anyone else.

That's the proper analogy for what happened in the original article.


This doesn’t track at all. This is me telling you that your forms are on my desk and throwing you the keys to my office. And then after getting your papers you go rummaging around other stuff.

Like sure I’m accepting a risk that you could do that but you’re still a dick if you actually do.


Content you're serving on a public URL is content you have published. It's not your house and you didn't extend anyone any trust or limited access. You put it in The New York Times. Maybe you hoped no one would find it because it's on page B30 and most people only read A1. But people are allowed to read page B30 if they want to.


The point is that there's no keys involved, nothing in your private office. No rummaging either.

Publishing to a public web server is analogous to that little free library, out in the yard. No keys, anyone can look in it at any time. If you accidentally put something sensitive in there, where anyone can see it without any access control, you can't blame them for doing so.


Having worked for a NYC government vendor who, unfortunately, outsourced a huge chunk of dev work abroad due to low costs (and, I assume, the manager's shady relationships with outsourcers), the number of bugs and the blatant negligence I observed in the delivered code was staggering. Even with said mistakes, the manager/project managers were more concerned with getting the project out the door, so once delivered, they'd usually ship without an internal audit of the code.

It makes one wonder if this was the case with the healthcare site you used, and whether this outsourcing of dev work is common practice among government vendors. If so, it seems we can only hope for something to fix these situations, given that government seems to care only once shit hits the fan.


I can understand outsourcing development, but I suspect part of the problem with outsourcing the development is that QA of the product is done by the same vendor.

"We investigated ourselves and found ourselves clear of any wrongdoing."


For IT, it’s often the case that whoever makes the software also tests it. When dealing with outsourcers, there comes a level of complexity. Government contractors don’t have skin in the game, and hence no motivation to appropriately handle this complexity.


People have gone to jail for incrementing integers in URLs like that (most famously, weev).


IIRC there was a recent story here in Germany where a court decided that the blame is entirely on the website owner, and incrementing an integer didn't constitute hacking, as no security measures were circumvented (a lack of authorization checks means no security measures were in place).

So I'm hopeful that the courts are slowly starting to wisen up in that respect.


Looks like there is just a little bit more to that story...

https://en.wikipedia.org/wiki/Weev


Didn't he also give the data he found to Gawker before notifying AT&T of the issue? That seems like a pretty key difference here, but I don't know what weev was charged and convicted for.


"Conspiracy to access a computer without authorization", which was and is completely preposterous. The Gawker part is completely immaterial, it was still a total travesty of justice. The judgement was later overturned on procedural grounds rather than on the merits (which it should have been). He did nothing that merited imprisonment, and even less so his mistreatment there.


It's more accurate to say it was a travesty of law, but probably not of justice.


Yes, but "weev" is also a well-renowned internet "troll". Basically - he appears to take joy out of denigrating, humiliating, insulting and doxxing other people.

https://en.wikipedia.org/wiki/Weev

He's also a neo-Nazi and white supremacist. I do believe in free speech, but some of the things he does seem to take it way too far.

And he famously doxed Kathy Sierra, a female technical writer who created the Head First series. I actually quite like some of the books in the series, and it's incredibly sad to hear incidents like this which actively discourage females in tech.

https://en.wikipedia.org/wiki/Kathy_Sierra

I suspect there's more to the AT&T incident than just, oh, I found a flaw, let me responsible report this to the relevant parties in responsible disclosure.


Bad laws and a corrupt justice system are infinitely more dangerous than a single man, however unpleasant he may be. People pointed out at the time that the CFAA is totally broken, but nobody listened because the victim was unsympathetic. Well, now we see in TFA how nothing has changed.

"Yes, I'd give the Devil benefit of law, for my own safety's sake!"

And it should be noted that weev's turn towards overt neonazism (rather than just antisocial trolling) took place in prison, where he was mistreated.


Weev was very much a neo-nazi even before he was imprisoned, but I suspect he limited it to private channels.

I once infiltrated some of the IRC channels he used in 2010 or so and have logs of him saying extremely antisemitic things in earnest.

(The groups I infiltrated also doxxed people and used that information in smear campaigns, which is why I'm using a throwaway for this comment. I checked HN's rules and guidelines and couldn't see anything against this; if I'm wrong about this, I apologise.)


The GNAA was probably the first tech group to play the "am I Nazi or am I just joking?" dogwhistle with the earnestness we often see today.


On some public transport company's ticket ordering website, someone discovered that the ticket's price parameter came from the client side. He decided to buy a very cheap one, then reported the incident. The next day, the National Terror Defence knocked on his door.
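The underlying bug, trusting a price submitted by the client, looks roughly like this (a hypothetical sketch with made-up ticket types and prices, not the actual ticketing site):

```python
PRICES = {"single": 3.00, "day_pass": 8.00}  # authoritative, server-side

def charge_vulnerable(ticket_type, client_price):
    # Flawed: bills whatever amount the browser submitted, so anyone
    # editing the form or the request can set their own price.
    return client_price

def charge_fixed(ticket_type, client_price=None):
    # Fixed: ignore any client-supplied amount and look the price up
    # server-side from the ticket type alone.
    return PRICES[ticket_type]
```

The general rule is that the client may choose *what* to buy, but never *how much* it costs; every hidden form field is attacker-controlled input.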


“No, I didn’t look at any other plans, but I’ve notified our lawyer who is now compiling the list of exposed company plans before she contacts each of these companies for class action suit proceedings”.


I dunno, this seems pretty normal. Just today news broke that in Germany some guy who found a flaw in a web-shop backend leaking the data of hundreds of thousands of people got raided, because the operator reported him to the police - and somehow both police and state attorney found it wise to prosecute him instead of referring the case to the GDPR officer to fine the operator.

It's pretty obvious that when you find a flaw you simply don't approach the people responsible for it, unless they have an EXCELLENT reputation of dealing with this. Otherwise do an anonymous full disclosure (edit: if you have an entity that routinely handles this sort of thing and has an EXCELLENT reputation, that would work too). If nothing happens, provide a PoC.

Of course people, even in IT, are kind of weird here. Somehow responsible disclosure got into people's minds as The Good And Proper Thing to do, and full disclosure being somehow irresponsible. Analogy: Some guy finds out the mayor is completely corrupt or does some illegal stuff. What do you do? a) Disclose this through e.g. the press b) Approach the mayor and try to get him to fix his stuff. Somehow, when it comes to IT security, people wanna see hackers do b) because a) would clearly be irresponsible. Wtf?


All this reminds me of the case of Lilith Wittmann [1], who got sued by the CDU (Germany's majority-holding party) in May 2021 because she discovered a security flaw in their election campaign app "CDU connect". Data from around 100,000 visitors and 18,500 election campaign helpers was not sufficiently secured.

She used responsible disclosure to let the CDU know of this flaw, got sued in response.

After an outcry from the community, the CDU apologized to her and retracted the complaint, and the proceedings were suspended at the end of August 2021.

It's pretty sad to see how people who act upon their best intentions, intentions which benefit society, are hit so strongly by those who are afraid to admit they made a mistake. Hit in such a manner that it tears apart their daily routine in a very negative way for months.

[1] https://lilithwittmann.medium.com/


Sad to hear just how common these sorts of stories are. I remember reading fairly recently about a guy who reported a flaw to a company working with the NHS in the UK (should emphasise this is an external company and not the NHS themselves) and ended up having to crowdfund his legal battle.


The CDU party no longer holds the majority :-)


No party ever held a majority in the Federal Republic of Germany. But the CDU was the largest party in the previous parliament, and part of the governing majority.


There is by definition almost always a party holding a relative majority (more seats than any other party), which the CDU did for the longest time.

You are correct that they did not hold an absolute majority (more seats than everybody else combined), ensuring that they always had to form a coalition to achieve that.


I've never heard of the word "majority" meaning "relative majority" without that qualifier, but I also wouldn't use the phrase "relative majority" to refer to what to me is clearly a minority, so what do I know :)

Nevertheless, it might be better to use unambiguous terms like "plurality", or define one's terms, when writing for an international audience.


Still can't tell if this is good or bad.


> a) Disclose this through e.g. the press b) Approach the mayor and try to get him to fix his stuff. Somehow, when it comes to IT security, people wanna see hackers do b) because a) would clearly be irresponsible. Wtf?

Huh? This analogy doesn't really make sense. The difference for software is extremely basic: if you publicize a vulnerability immediately, you give more opportunity for it to be exploited while it's being fixed. Malicious actors who hadn't found the vulnerability yet now get it handed to them on a silver platter.

Private notification simply gives the operator a head start on closing the hole before it's more widely known by potential attackers.


That's not the point of the analogy (some other siblings got it wrong, too, so the fault is likely mine). The point is that it's inherently very risky for you to contact someone about a problem they created accidentally, negligently or possibly intentionally in order to get it fixed (and that might result in them being fined or otherwise punished when the issue becomes known). So you should not do that. You should either seek a trustworthy intermediary for you to handle the interaction (this might be difficult / non-existent in your locale) or reveal the issues anonymously.

Again, it's not about Optimally Mitigating Corporate Security Fuckups, it's much more basic than that: it's about keeping you safe. This should obviously be priority #1. Anyone telling anyone else to do responsible disclosure by default because That's What Good Guys Do And You're Not A Good Guy If You Don't is quite clearly not putting the safety of the reporter at #1.


I see, yes -- I certainly agree with disclosing safely/anonymously.


> The difference for software is extremely basic: if you publicize a vulnerability immediately, you give more opportunity for it to be exploited while it's being fixed.

if it’s live it’s already being exploited. simple principle, but very effective.


Certainly. I said "more" opportunity.


Yeah, in my mind, the only "responsible disclosure" these days is one made anonymously to the local data protection authority.


Reading through these comments gave me the same thought. Notice a problem? Buy a raspberry pi with cash, visit starbucks, upload report about the issue to reporters via newly created (and never used again) gmail account, throw away raspberry pi, never talk or think about the issue again.


Gmail is probably not ideal. Last I tried I needed a phone number to create an account that actually worked.


this is a poor analogy because the IT department isn't doing something illegal, they are just doing something poorly. The proper analogy would be if you found out the mayor routinely left the special stamp that can get anyone released from jail lying on the park bench where he eats lunch. Do you then go around telling people "hey, the mayor does this", or do you say "hey mayor, please stop taking that stamp with you to lunch, because you always forget it on the park bench and someday somebody is going to use it to do bad stuff"?

OR let us reverse the analogy

You find out Facebook is running an international slave trade, using their data to find vulnerable teenage girls, sending them invites, and then kidnapping them. Do you A) approach Facebook and try to get them to stop their practice, or B) alert everyone immediately?

The answer is you alert everyone immediately because Facebook in this example is doing corrupt and illegal things. There is a difference in how you should react concerning security problems that others can take advantage of and willfully committing illegal and corrupt acts.


> this is a poor analogy because the IT department isn't doing something illegal

At what point does it cross the line into IT malpractice? I would say that not even bothering to verify the current user has the access to view what is being requested is well over that line.

When you're dealing with PII, HIPAA, etc, there should be a standard level of competence. If I go into a doctor's office with a runny nose, and they remove my liver, simply stating that they practiced medicine "poorly" shouldn't be a defense.


That comparison is a bit off though, because exposing the mayor's corruption doesn't put other people and their data at risk.


Troy Hunt from Have I Been Pwned has an "EXCELLENT reputation".

Perhaps responsible disclosure could pass through his entity?

It's a way of anonymising the source to keep them safe, and centralising the risk to someone who is already highly regarded by companies and governments.


Perhaps many people are spoiled and blinded by the SV megacorp culture of (usually) taking in bug reports and fixing them and handing out recognition/money. It would be nice if everyone accepted responsible disclosure, but that's not going to be the case until some legislation comes along to require it in the absence of malice.


It's not "spoiled" to expect, at worst, a thank you for pointing out a serious and extremely easily exploited vulnerability in public-facing code. You are inarguably doing the company a favor by disclosing it to them, helping them cover their ass and, in some cases, their lack of competency.

Something shouldn't have to be literally illegal to be considered shitty behavior. (Of course, people are often incentivised to be shitty, which is why legislation should also be applied to the issue)


Umm, this seems to imply that these security vulnerabilities are intentional, which doesn't seem like what is happening. In your mayor example, you wouldn't go to the mayor because you know he is intentionally trying to break the law, so going to him doesn't make sense.

Incompetence is very different than malfeasance.


The problem is that the response, as it pertains to you, is going to be the same for incompetence or malfeasance in a large number of organizations. Consider what the average self-interested politician would do if you uncovered a corruption problem in their administration they did not know about. Are they going to fix the problem, reward you, and risk losing the next election beneath an avalanche of attack ads? Or are they going to bury it and crush you?

Large governments and corporations are not your friends. They will hurt you if it benefits them, often very short-sightedly and regardless of the root problem. There are far too many articles like this one to think "responsible disclosure" is a safe practice. I remember one case where the red team was hired by the agency involved explicitly to perform pentesting, and when they found a vulnerability the government pressed charges!


> I remember one case where the red team was hired by the agency involved explicitly to perform pentesting, and when they found a vulnerability the government pressed charges!

If the case you’re remembering is the one where the red team assumed (without asking) that physically breaking into the courthouse at night was “in scope” of their engagement, I’m of the opinion the short-sightedness there was not the agency…

https://www.cnbc.com/2019/11/12/iowa-paid-coalfire-to-pen-te...

It’s _maybe_ grey area. But there’s no way I’d escalate a pen test to breaking in to a courthouse without explicit in writing permission from someone clearly authorised to give it, including in writing assurances that all relevant law enforcement had been notified (at least at high levels, if part of the authorised physical pen test was actually testing on-ground law enforcement capabilities).


That is the case I was thinking of, but I went back to check my memory and it was not a gray area. They had a signed contract from the Iowa Judicial Branch and its Information Security Officer that specified gaining physical access to the building. Source:

https://krebsonsecurity.com/2020/01/iowa-prosecutors-drop-ch...

They did fail to verify that law enforcement was aware (the client specifically asked them not to) and they seem to have misunderstood the building's ownership structure. The end result was that they fulfilled their contract and were arrested for it after encountering one idiot with power, after which the local politicians piled on in order not to look weak.


Local CERT is sometimes happy to be a proxy; still best done anonymously tho


You've got things the other way around: it's not about the disclosure, it's about mitigation.

If one contacts the corrupt mayor for a timed disclosure, he gets time to hide his crimes or can continue being corrupt, while the press running the story only damages the mayor.

If I run to the press with a vulnerability, everyone is empowered to exploit it. Sure, it puts lots of pressure on the devs, but devs can only work so fast, which creates a window of opportunity that damages both them and their users. A timed disclosure doesn't prevent exploitation that's already happening, but it doesn't make the problem worse by itself.

The desired outcomes in the two cases are different, and it's no surprise different strategies are optimal.


> but devs can only work so fast, which creates a window of opportunity which damages both them and their users.

Sadly, time and time again, what in practice ends up happening is the window of opportunity is wasted by the devs being instructed to work on new features rather than fix critical security bugs the company thinks are not widely known.

Apple’s response to four zero days being only the most recent high profile example of that.


I think B would be blackmail?


Domestic abuse is pretty "normal" too. That doesn't make it tolerable.


These 2 things are not even remotely comparable.


So? Something being "normal" doesn't make it just. Or even legal.


It's not normal in society to commit domestic violence; most people in western society would find themselves ostracized from their peers if they were a known wife / child abuser. If I told my friends the website allowed me to see other plans and I checked them out, they would just ask if I saw anything interesting and chuckle at the flaw. Curiosity is normal; beating your spouse is not.


Ah, I see the misunderstanding. The behavior I'm seeing called "normal" is people being punished in response to responsible disclosure, where the actual guilty party is illegally leaking private information. I'm comparing administrative abuse to domestic abuse.

If changing a few characters in a URL was a crime, I'd be gone for life.

edit: and, I'm using "normal" in the same sense as the comment I was originally responding to: to indicate an everyday occurrence


How is it a bad response? They want to know what data has been exposed and ensure you delete that data. That's data leak 101. Why would you be defensive about it?


The point being that the IT guy made sure this guy will never try to report anything again, given that they ".. would be watching .. at our IP address .. while the issue was being fixed."

Instead of a normal company having a bug bounty and sometimes even with cash prizes.

Do you think google "will watch your IP" after you reported a bug? or will they give you money?

What helps in the short run? and what helps in the long run?


> Do you think google "will watch your IP" after you reported a bug? or will they give you money?

I honestly think they'll do both - but they won't tell you they're watching your IP because it's needlessly antagonistic.


Because you have no way of knowing if they deleted the data or not from their system. It's a pointless exercise, unless you're just gonna take their word for it.


When someone is kind, helpful, and goes out of their way to help you, for free!!, you have no business demanding, insisting, or threatening a single thing.

Proper response would have been "Wow! Thanks!" and at worst "Please don't share what you saw, and thanks again."


> They want to know what data has been exposed

They should check their own logs instead of relying on a 3rd party that may not tell the truth. This shows incompetence.


Because he was clearly trying to threaten him?


Obviously this person from the IT department has very little understanding of how computers work, and I'm not saying they should.

Each time a breach like this or in the original post happens, it makes me feel that our tools are just not there yet. If there were simple tools that caught vulnerabilities like this we would improve the standard of security.


> started grilling me about how many other plans I browsed

I think as soon as anything healthcare adjacent comes up most people will feel the need to get very nosey about what you accessed. It's possible they would have needed to file an incident (though, honestly, they should've regardless of what the reporter responded with) and gone through some procedure.

It's unfortunate the guy was a dick about it - but asking the extent of the data you accessed probably isn't unreasonable and may have been legally mandated.


Oh there are so many things like this. Ages ago, I used this to find a whole listing of internal fax numbers for a government org I wanted to get someone's attention at and totally slow-spammed them using a fax API. Got a couple of reads based off that.

There's no way I'm telling them I did that, haha!

Rule 1: Never tell people they're making a mistake unless you trust them to trust you.


Serious question: how do you figure out when this is the case?


Fear. The IT person is likely scared of (fill in the blank - blame, losing their job etc. )

They are scared because their leadership is likely also afraid - and so unable to provide protection by taking responsibility.

This is the vibe of an organization where mistakes lead to blame and punishment instead of quick resolution and learning.


I found a similar kind of problem at a bank, though the vulnerability was so simple I stumbled on it by accident. I promptly switched banks but was never brave enough to report it for fear I might wind up in a very bad situation.


What's bonkers is that _your own data_ was also accessible. Who's to say other users didn't get that data, choose not to report it, and keep it?

Your own outrage to your data being exposed would have been perfectly reasonable.


it's best to assume Responsible Disclosure™ is a psyop to find gullible people


Within an hour, you say? That's incredibly fast. I'm impressed by that fact alone, regardless of the quality of the response. I'd have been shocked to get even an email reply within an hour.

I would have thought using incrementing IDs in a URL was as beaten a dead horse as sanitizing your strings in a SQL query. Then again, the ACA websites behaved as if the lowest bidder was selected.
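For anyone wondering what the missing check looks like, here's a minimal sketch in Python (hypothetical data model and names; the real site was ASP.NET): the handler has to verify ownership of the requested ID, not merely that the ID exists.

```python
# Hypothetical in-memory stand-ins for the marketplace's data.
PLANS = {341: "Acme Co plan list", 342: "Other Co plan list"}
OWNERS = {341: "alice@acme.example", 342: "bob@other.example"}

def get_plans(requested_id: int, authenticated_user: str):
    """Return (status, body) for GET /employers/<requested_id>/plans."""
    if requested_id not in PLANS:
        return 404, None
    # The vulnerable site skipped this ownership check entirely,
    # so any authenticated user could walk the ID space.
    if OWNERS[requested_id] != authenticated_user:
        return 403, None
    return 200, PLANS[requested_id]
```

With the check in place, changing 341 to 342 in the URL yields a 403 instead of another company's employee roster.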


Similarly, a gov registration fee website simply disabled the "next" button at the UI layer because I was past the deadline. Easy bypass; I paid the fee and never heard anything else.


Good one.

A friend of mine bought a book online that was just a link to a PDF at an S3 URL.

I chopped off the /book.pdf part, and it was just in an S3 bucket with all the other books they sell.
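The usual fix for that pattern (a sketch of the idea, not that store's actual stack) is a private bucket plus short-lived signed URLs, so trimming the path gains you nothing. The signing scheme, stdlib-only and with a made-up secret:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to clients

def sign_url(path: str, expires: int) -> str:
    """Return path?expires=...&sig=..., valid until `expires` (unix time)."""
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(path: str, expires: int, sig: str, now: int) -> bool:
    """The download is honored only with a matching, unexpired signature."""
    msg = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return now < expires and hmac.compare_digest(sig, expected)
```

A link signed for /book.pdf doesn't verify for any other path, so browsing the bucket's neighbors stops working.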


I worked for a government contractor and I understand that behavior completely. The person you spoke with was tasked specifically with damage control. I am positive _somebody_ was grateful for your input, but those people aren't tasked with chatting on the phone. I know because I was dispatched to fix and quantify the scope of a similar issue, where a URL was allowing users to download treatment plans of other users. Being healthcare, this is taken rather seriously. While I was happy to fix the problem and grateful someone reported it, I was tasked with regularly reporting the progress of my work and the scope of the breach throughout the incident. My only irk with the person who reported it was that they literally called the governor of the state after casually browsing hundreds of treatment plans, when they could've just called IT support. But yeah, I didn't talk to them; a low-level IT lackey was given that task while I fixed the problem.

Oy vey, that was a mess though. Breaches happen, everyone knows it, even at companies dealing with PHI that are beholden to crazy HIPAA fines. My report ended up conflicting with a bunch of dates a former supervisor, who at that point wasn't even involved in the department, had knowingly misrepresented to the state. After the fix was merged and I had documented the whole scope of the breach, I went and looked at the emails and reports on the matter. She'd gone and told the state all about the scope of the breach, misquoted release dates of the fixes, and just minimized a bunch of things with which my report directly conflicted. This person, who wasn't in our department anymore, shouldn't have even been involved in the first place, yet here I was looking at publishing a report that would land her in trouble. It put me in a difficult spot. I didn't want to get her in trouble and I thought about misrepresenting my own report. In the end I figured she'd made her bed; my report was the definitive statement on the matter and her emails were largely reactive, so maybe they'd just forget what she said. It was, and they did.

The most important thing you need to do during a breach is be honest. On the other end be vocal and trust in the fact what you're doing is ultimately helpful. The government doesn't want to fine businesses. The only thing that'll end up screwing a company is if they're found to be negligent or dishonest. Negligence is easy to avoid because all you need to do is reasonably try to fix the problem once you've been made aware of it. Dishonesty on the other hand is a foot... that like a diaper-bound chubby baby, some people can't help shoving into their mouths. Don't throw IT under the bus though man, even if that guy on the phone was rude there were some good people on the matter. Some people just don't know how to act when they're caught up in a problem.


There is a 0% chance this story is true.


So his issue was not that you discovered the bug. His issue was that after discovering it, you went on to view a bunch of other people's data.

What you did was walk down the block, pull on the doors of random houses, and if you found one unlocked, went in and took a look around. If you found my door unlocked and left me a note, I would be grateful. If you went in and took a look around, then did it to all of my neighbors, we would have you arrested.

The bug here is an unlocked door. It being unlocked is a security risk, and people are thankful if you let them know. If after identifying the security risk you proceed to commit a crime, you're surprised people aren't "grateful?"

>difficult to hold yourself accountable

isn't it though...

>are malicious actors

so you.


This was not an "unlocked door".

This was going to the doctor's office, and while sitting in the room with your files, seeing a bunch of other patient files just left on the desk in eyesight.

Not in an unlocked filing cabinet, not in an envelope, but in the open.

Changing a URL is not "malicious use" nor is it considered doing something you're not supposed to.

As a web client, I should be able to change or manipulate the URL to my heart's content, it is 100% the server's job to restrict my access and make sure that I cannot access resources I shouldn't.

This is entirely the fault of the operators, not the user, and they were mad at them because they _allowed_ the user to access things they should not.


It's even worse than that. I think a better analogy would be that you've requested the doctor mail you your records and instead the doctor ships you his entire filing cabinet with your folder taped to the top and a note saying "read this one." (but no mention about why the filing cabinet is there too)

They weren't just in the open. A copy of these records were pushed, unsolicited, to the user's device and the user simply looked at what was sent to them.


> instead the doctor ships you his entire filing cabinet with your folder taped to the top

and as soon as you get this data and you read all that information sent to you by mistake instead of seeing it's not yours and stopping, you have committed a crime. what exactly is it that you don't understand here? what the op did is literally against the law.

>pushed

you should look up how http works. the request to get the data comes from the client browser. it's called a GET. the op requested to GET someone else's records from the server, after knowing the GET request he sent to the server would get him this information.

so again I ask - what is it that you don't GET here? The OP very literally committed a crime. a crime being very easy to commit, does not make it legal.
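to make the mechanics concrete - here is roughly what a browser emits when each URL is typed (hypothetical host and paths, sketched in Python rather than a packet capture): every edited URL becomes its own client-initiated request line.

```python
def build_get(path: str, host: str = "marketplace.example") -> str:
    """Assemble the raw HTTP/1.1 request line a browser sends for a typed URL."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

# Editing 341 to 342 in the address bar produces a brand-new request,
# originating on the client; the server then chooses what to return.
own = build_get("/employers/341/plans.aspx")
other = build_get("/employers/342/plans.aspx")
```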


> and as soon as you get this data and you read all that information sent to you by mistake instead of seeing it's not yours and stopping, you have committed a crime.

There's no such crime. If you disagree, by all means cite a statute.

> you should look up how http works.

I'm intimately familiar with http. Upon issuing a request for your records (the request, GET or otherwise), you receive a response, pushed to you, with records you did not request.

I think you may want to re-read my comment, this time more carefully and thoughtfully.


"push" means the data is pushed. as in without a request from you. it's mind boggling how you are not getting this. exchange is an example of this - you get email pushed to a listener on your mail client, without requesting that data. if you use pop3 however, you request the data and receive a response. you are arguing a request - a GET - the literal opposite of a push, is a push. this is something anyone who has used email would know, so you are being purposely dense, and this conversation is done - I will not read your reply.

as far as the crime, it's called unauthorized access to a computer system, and many people are in jail for it. whether that system is password protected or not makes absolutely zero legal difference.


We don't have to continue the discussion but I'll wrap this up regardless for the peanut gallery.

As I've mentioned, my metaphor is request, response. This additional data is included, unsolicited, piggybacking on the response. I think this is clear.

Regarding the crime, no, this is completely incorrect. It sounds like you're referencing 18 USC § 1030. This law cannot apply whatsoever to this situation because there is no unauthorized access. The data was pushed, unsolicited, as part of an authorized access. It's being sent to all users when they use the system in a normal authorized fashion.

Viewing the data takes place on the user's own device, because the state itself put the information there. We are all authorized to access our own devices as much as we please.

The suggestion that the CFAA might apply here is nothing short of absurd.


But where are you getting your whole "piggybacking" idea from? The original story was that the user "verified" that he could get anyone's data by changing an integer in the URL. Typing a new URL into his browser makes it, in Web terms, a new request, not anything "piggybacking" on an old one.

So the data was pushed, very much solicited, as part of a new access. That the user's browser held an authorization (cookie?) for a previous access to the user's own data doesn't quite, AFAICS, mean that this new access to other data was also actually authorized.


> Changing a URL is not "malicious use"

the law disagrees

as far as your doctor's office strawman - it's a strawman. To see those files, you don't have to actively do anything, if they are left at the desk. Now, if you pick up one of those closed folders, open it, then start looking through it - you have an equivalent comparison. You also have an arrest record.

But don't argue with me. What he did is literally illegal.


> seeing a bunch of other patient files just left on the desk in eyesight

...and then proceeding to rifle through a bunch of those files to satisfy your curiosity.

Finding a vulnerability and reporting it -> Good

Continuing to exploit the vulnerability after you've found it just to satisfy your curiosity -> Bad


I think that's a PRETTY uncharitable analogy and interpretation of the OP's actions.

I would say it's more like:

You are walking down the street, and notice that there is a public noticeboard. It has a list of names, yours among them, associated with a number of steps each. It instructs you to walk a certain number of steps down the street, and then look up at the paper taped to the sidewalk that many steps down.

So, you do, and upon looking down, you see some personal information about yourself! You are a little perplexed, since this doesn't seem very secure. So you take one step back, and look down. Wow, yep, not very secure, there's information there too!

Being a human, you are naturally a little nosy and curious, and as these are publicly posted, after all, you glance through a couple more before finally regaining control of your better sense of civic duty, and report to the owner of the notice board that there is a problem with their "security".

I think this is a better analogy because:

* browsing to a web page is NOT the same thing as going into someone's house.

* the internet is public.

* there was CLEARLY no malicious intent. The OP clearly didn't harm or intend to harm anyone here, even if perhaps he should have immediately stopped when he began to suspect the website had a flaw and he shouldn't be able to see this information. I see no evidence of malice here.

I do agree that in general, just because a system responds 200 OK, you're not necessarily clear to do anything you want when what you're doing is obviously wrong. But at the same time, we should NOT be prosecuting or blaming people when they're able to access more than they're supposed to be able to PLAINLY due to the software's design insufficiencies and there's otherwise clearly no intent to cause harm.

We really need to take a more even-handed approach to this. And, we REALLY need some kind of a professional bar in software engineering. I would expect a student in their final year of CS to be able to produce a more secure system than what the OP described, so the fact that it exists in a quasi-government website is a complete fucking joke, if you'll pardon my language.


> You are walking down the street, and notice that there is a public noticeboard. It has a list of names, yours among them, associated with a number of steps each. It instructs you to walk a certain number of steps down the street, and then look up at the paper taped to the sidewalk that many steps down.

Or perhaps, "Here's a binder with numbered pages; turn to page 345 for your information." You wonder what's on page 346, so you turn the page, and lo and behold, someone else's information.


Which as I said would be perfectly fine. You then tell the owner of the binder that confidential information for other people is in the binder, and you get gratitude.

If once you find the information on page 346, you then keep flipping and looking at people's private information on the next hundred pages like the OP did, you have now committed a crime. The fact that you can easily access something, does not give you the right to access it. If you think otherwise, you think malware that steals your contact and banking info is legal. No, not the one that hacks into your computer. The solitaire game you download and install that has a trojan in it.

After all, you gave the solitaire game access to your hard drive to save and read its own games. Perfectly fine for it to scan the rest of your files. You gave it access to your network card so you can upload your scores. Perfectly fine for it to capture all other network traffic. All trojans are now legal as long as they're packaged with software you voluntarily install.


> If once you find the information on page 346, you then keep flipping and looking at people's private information on the next hundred pages like the OP did, you have now committed a crime.

I agree that morally, the guy should certainly not have continued to look through what he knew was private information he wasn't meant to have access to. I'm not sure the law sees a difference between looking at pages 347-350 and looking at page 346, however.


I am sure the law sees the difference. Intent is what makes the difference between a murder charge with the death penalty and a suspended sentence with supervision for manslaughter, when you hit a person while driving drunk.

Page 346 was an accident - your intent was to read your own data. In viewing further pages, as the OP stated, for the explicit purpose of viewing other people's confidential medical data, the intent is a crime. It's the same thing as walking up to someone's desk in an office you're allowed to be in, and looking through their files.


You didn't know for sure that page 346 would have private data, and that you'd be able to turn the page; but there was a reasonable probability that you would in fact see private data. If someone died instead of having their privacy violated, it would certainly be classified as manslaughter (i.e., you were doing something you knew might be "dangerous") rather than just a plain accident where there was no fault.

I don't know what the actual law is, but given the benefits to society of "good people" reporting this kind of issue, I think that toying around with something like that should be considered not a crime at all, rather than being considered a lower-severity crime.


I never claimed page 346 was breaking the law. I claimed after you discover that the action reveals private data, and you repeat that action a bunch of times for the explicit purpose of getting more private data, you are now a criminal. This is what the OP very clearly stated he did.

He did not report the issue after finding it. He abused the security hole for his own benefit. He is a criminal.

Low severity? In civil court, they can take every penny he has, every penny he'll ever earn, and his house. In criminal court, he can be charged with unauthorized access to a computer system, one charge each time he did it. And he did it a lot, and they have logs. Which is all literally in his post.

Viewing other people's medical information is not a low severity crime btw.


It's a public website. If we have to use the doors analogy, these are doors at City Hall, not people's houses.


And it's a public street. It's what's inside the houses (URLs) that is not public.


Doors (holes-become-walls / walls-become-holes) are for controlling whether things can go through. URLs are for letting things through.

A url is not a door, but an archway, or possibly a door frame.


...and you're not allowed to walk into someone's house if they only have a door frame instead of a locked door. Locks are for preventing criminals from forcefully breaking in, but you don't have to break in to commit the crime. That's why it's called "breaking and entering" - there are two criminal acts.


An arch is not something that merely isn’t locked, but something that isn’t meant to even be closed.


Yeah you still can't just walk into the Mayor's office just because it's unlocked. Access isn't authorization.


And yet if I do just open the door to the Mayor's office and it's unlocked and I wander in, that's still not the same sort of trespass as entering someone's home.

And, if I'm in City Hall, the mechanism that keeps me from entering the Mayor's office should be the security guards and key-cards, not my disinclination to open a door.


And if you walk in there, realize it's a restricted area, and start opening up file cabinets and reading confidential documents, you're now in jail. the security guard being there or not makes zero legal difference - things not being locked or blocked does not give you rights to them. thank you for proving yourself wrong.

you mistakenly wandering in is not illegal. however your strawman is not what the OP did. it's a paragraph of text for crying out loud. please at least read the story before commenting.


Asking the web server to give you information without lying or falsifying any of your request data should in no way equate to walking into random houses that are unlocked.


The proper analogy is-- you visit a public clerk and make a formal request via a form, receive the requested document from the clerk.

Then, while you're at the clerk's counter you notice a menu up high above, like at a fast food restaurant, listing random commands with no explanation. Curiously, you call one out to the clerk and see what happens. The clerk returns with a crushed can. You call out another. The clerk dumps a roll of pennies on the counter.

That's not fraud, it's negligent supervision and stupid design.


I guess the argument would be that changing the number in the URL is lying, as you are providing an ID that was not assigned to you

(Playing devil’s advocate here)


you are falsifying what customer you are. the guy literally said he put in different customer IDs into the URL after he discovered what part of the string was a customer ID. It would equate to a guy walking to your door, saying he's from the electric company, then reading the medical documents on your desk after you let him in.


This analogy isn't apt. What the OP did was the equivalent of asking, "Can you share these files with me?" and the other party going, "Sure, here they are!"


It's interesting you completely omit the part where he figures out the string in the URL that's a company's ID, and uses that to request a file. In your example it would be "I'm this other person, can you share my own files with me?" Except he's lying, and he's not the other person.

Tell me, what happens if you, heavyset_go, send an invoice to Apple, and the invoice says you're "Cisco" and they pay it. Do you get to keep the money, or does the prison get to keep you?


> It's interesting you completely omit the part where he figures out the string in the URL that's a company's ID, and uses that to request a file. In your example it would be "I'm this other person, can you share my own files with me?"

The OP was already authorized and authenticated on their own company account. They never falsified their authorization or their identity, they just requested documents at a specific URL and the other party had no problem replying with said documents.


"malicious actors" is quite a strong statement, for someone who was not really aiming to harm anyone or get much personal gain from it.


There's too much moralising and too much metaphor here.

It's really a lot more simple:

> After I shopped a few companies to see how our plans compared

This isn't white-hat, it's grey-hat at best. Found the vuln, and then used it.

I don't agree with the dramatic reading that I'm responding to.


Wow. The parent comment did not state they then sifted around for personal data. They checked whether there was a bug and found it. For all we know the personal data is front and center, so this rudimentary check also revealed personal information. It's not like they said they downloaded the SSNs. Good job at miming the ignorance and bad faith of the nameless bureaucrat the parent comment mentioned, though; maybe this is just satire and I'm missing it..


> After I shopped a few other companies to see how our plans compared...

You might have missed this part. I did, too, on first reading. They did sift around.


Wow. You are missing it. You are missing where they explicitly stated they sifted around for personal data, numerous times, and you are missing that sifting being what the company was complaining about. Personal as in insurance plans other people have. He explicitly states he did that after he found the bug. You're missing reading most of the post actually.

If that company decided to file charges against him, this HN post is an admission of guilt for a crime.


You seem to be implying that accessing a competitor's pricing is immoral. Do you think a company's pricing should be private information in the same sense that your house is private?


When you went to 342 you were white hat. When you went to 343 you became black hat.


I don't think you're coming out of this looking too great, either. After finding the vulnerability, you then exploited it to gain an advantage, in addition to reporting it.


I don't know, that sounds like a pretty valid response given that you "shopped a few other companies to see how our plans compared".


If I ask you to show me a document, and you willingly show me the document, who exactly is responsible for the disclosure?


In real life, if you do it under false pretenses, you are. In this analogy the real-world version would be considered fraud.


Asking for the next file isn't false pretenses. I don't know if this analogy works quite right. Even rifling through a file cabinet wouldn't be false pretenses, it would be something else.

And you have to cause injury for it to be fraud. Is "Help I was too honest to a customer." a valid injury claim?


The closest real-life equivalent to asking a computer server for a document and getting it is asking a human server (e.g. office clerk, archivist) for a document and getting it. If I go to the IRS to do some paperwork and notice it says "File #7881991" in the top right corner and I go to the clerk and ask them "Hey, can I have files 7881992 and 7881993, too?" and they give them to me, who is liable for that? It's quite obvious.


This is 100% the correct analogy.


But this is assuming that the server has more agency than it does. Servers don't have minds and they don't make authorization decisions. This is more like someone giving you the key to a filing cabinet in order to retrieve some documents, and while you're there you snoop on the ones next to yours.

Is this system more trusting of people than it should be? Probably. Does that mean you're allowed to snoop on other people's documents -- nope.


The humans who administer the server have agency. They went and purchased an apparatus for publishing information to the world. They connected it to the world. They pointed it at that information. They turned it on.

A printing press also isn’t sentient and can’t guess whether its operators really mean to share every sentence on the plate. But browsers and readers of printed materials (that are left in public places) have no obligations to the publisher’s state of mind. Why should browsers of digital materials?


> This is more like someone giving you the key to a filing cabinet in order to retrieve some documents

No. It's like someone asking you what you need, you telling them "I want all my documents and the ones from my neighbours because I feel like it", and them proceeding to hand you everything you asked for neatly collected in a folder.


You’re still ascribing agency and authority to a fancy vending machine. The server has absolutely zero authority to grant you authorization to the documents. It can only grant you access. The servers are not representatives of the government or the site-owners, they are just machines. And just because the vending machine is broken and works without you paying doesn’t make it not stealing.


The fact that the server cannot make decisions that were not predetermined is exactly why the responsibility for its behaviour lies with the people running it. They make the rules; they are the ones whose job it is to read the manual. And when someone makes a technically valid request (instead of, say, an SQL injection attack), an incorrect response is not the user's fault. They might not even be aware that they're not allowed to make a specific request: it's reasonable to assume IDs in the URL are not sensitive information, as URLs are public and unprotected by default.

Of course it's on the user if they know they're not supposed to have access to some info and they use it to their advantage regardless. If they're a nice person they'll even report the issue (though less likely after news like this).

> just because the vending machine is broken and works without you paying doesn’t make it not stealing

So if it's broken and doesn't work despite me paying, does that make my payment a donation? No. Though it probably is theft if I knowingly abuse the error for profit.


I feel like I'm taking crazy pills here. We're specifically talking about someone who knew that they weren't supposed to access other businesses' data and did so purposefully for their own gain. How is that not abusing the error for profit?

Like you can say "URLs aren't sensitive by default" up until the guy admits that he knows it's an error and he's accessing the private data he's not supposed to see. That changes the situation completely.


Right. The server is not liable. The people who set up the server to serve every client's application data to any client are.

Just like the IRS admin assistant in the example was, the agent to cause the transfer. The filing cabinet/server is not the agent, simply the repository responding to the system and practices in place.


But this is assuming that the server has more agency than it does.

No, it merely assumes the server is acting on authority of the organization identified by the domain name. It doesn't assume agency, only representation.


Which also seems nuts. Like they’re servers. How anyone assumes that some Ruby code can be acting as an authoritative representative of the government is silly.


Yes, how could anyone assume that the ATM down the street can be acting as an authoritative representative of your bank when you insert your card? That's just silly.

/s


But that’s exactly right! It’s not. If the machine has a bug and reports the wrong balance or gives you too much or not enough money on withdrawals it’s explicitly not authoritative and you can get it corrected by an actual representative of the bank.


If a human in a government office has a bug and reports a wrong result when queried, it can be corrected by their higher-up.


If you give me the key to the files and don’t explicitly forbid me then it certainly does mean I’m “allowed” to look at the documents. You literally and explicitly just allowed me to do so by granting me access.


No, it's not, because computers and humans are not the same. A computer might give away too much information because someone misconfigured it. The closest human analog to that would be if the human was improperly trained in what information they're supposed to give out. But the human also has other options: they could be tricked into giving out more information than they should, or they could be giving out more information because they're being paid off or given some other benefit.

You can certainly assign various levels of blame and responsibility to the human "server" in those scenarios. But the human on the other side of the interaction, the one requesting information, doesn't magically become free of reproach. If they are requesting information they know they should not have access to, and then making use of that information for their own gain, they're guilty too.

There's a very narrow carve-out for the white-hat: requesting information with the intent of uncovering vulnerabilities, with the intent to help them get fixed. We expect a white-hat actor here to destroy and not make use of any information they obtain that they shouldn't have.

> If I go to the IRS to do some paperwork and notice it says "File #7881991" in the top right corner and I go to the clerk and ask them "Hey, can I have files 7881992 and 7881993, too?" and they give them to me, who is liable for that? It's quite obvious.

Yes, it is obvious: the clerk is liable for giving you something they shouldn't have, and you are liable for fraudulently representing yourself as someone who should have access to those files.

I don't get where this idea of "the other person let me do the crime, so the crime is ok" comes from. That's just not how the law works in the real world. If you then walked out of the IRS office with those files, I would absolutely expect you to get arrested. (Even if you immediately gave the files back, you'd probably be on shaky legal ground.)


> Yes, it is obvious: the clerk is liable for giving you something they shouldn't have, and you are liable for fraudulently representing yourself as someone who should have access to those files.

It's always okay to ask for things. There would be no way for society to adapt, progress, or change if people were limited to only asking for things that they knew in advance they were allowed to have. If it's legal for a telemarketer, pollster, reporter, cop, or recruiter to contact me and ask me questions then it's just as legal for me to contact and ask a web server a question. The correct response to unauthorized requests is a 4xx, not a lawsuit.

More to the point, what makes it okay to ask a new web server for "/" without permission? Even if browse-through terms of service were legally enforceable they aren't known to the user or the browser before making the first connection and request.

If a web server doesn't want to answer questions then don't connect it to the Internet.


It is the intent of the act, not the act itself, that is important.

If you know doing x will cause y, then when you do x you are doing y and you are responsible for the consequences of doing y. It doesn't matter what x was.

This is especially true in the real world.


I think misdirected mail might be a better analogy. My understanding is that, even if it is delivered to your mailbox, it is still a felony (in the US) to open mail that is not addressed to you.


Users don't normally construct urls by hand. Wouldn't the equivalent be more like:

You fill out a form to request a document from the IRS. You give the form to the person, and they give you the document.

You notice they don't check IDs, so you change the name on the form, and get someone else's document.

This definitely seems to fit the definition of fraud:

380 (1) Every one who, by deceit, falsehood or other fraudulent means, whether or not it is a false pretence within the meaning of this Act, defrauds the public or any person, whether ascertained or not, of any property, money or valuable security or any service [that's the canada definition]


But... they didn't change their name on the form. They literally just said "I'm still me, but I want this other file now, please."

All company data was, in OP's scenario, made public to any and all authenticated users.

There is no way to rationally spin this as a malicious act, in my view.


I don't think simply changing the ID in the URL to see what would happen is itself a malicious act. But, after discovering the vulnerability, OP admitted to continuing to exploit the vulnerability so they could make use of the information they'd gotten, information that they should not have access to. That part of it is actively malicious.


No one is claiming "I'm still me, but I want this other file now, please." is a malicious act.

Downloading a number of them and comparing information, however, is not necessarily malicious but rather sketchy.


Well, they changed an ID number. I guess the real-life version would be changing the SSN on the form.


An SSN is considered private info; the plan number wouldn't be.


"deceit, falsehood or other fraudulent means" => editing the URL is none of those. Forging a cookie for access is, just like randomly trying passwords and usernames.

The closest real-life example I can think of would be along the lines of: your car is in a public parking space and someone looks inside, vs. the same car is in the garage and someone breaks the door to look inside.


> Users don't normally construct urls by hand

You never typed google.com into the browser? I doubt it. Maybe you just mean "construct" as in edit the url to access another site - well, that's still a perfectly normal use-case. I regularly change reddit urls to old.reddit because it gives me a better user interface. Or access a subreddit by adding an "r/subname". Sure, those aren't alphanumeric IDs, but that distinction is meaningless. Some unique IDs on the web do actually consist exclusively of english words. And some numeric IDs are harmless page numbers or pagination info.


I don't think changing the name is a fair comparison.

This definition of fraud doesn't define the word "defraud"? I don't know how I'm supposed to see if it fits or not.

It can't mean any action, or going into a store, lying about my name, and asking what aisle has baked beans would fit. Because that has "deceit" and "any service".

If I interpret things as the service being minimal and provided for free, so that I'm not deceptively getting the service, then we have to look at what actually gets sent to me, and whether it's "property, money or valuable security". And since it's just a copy of the data sent at no cost, it's much harder to argue fraud exists.


The data in this case clearly had value; OP admitted to continuing to change numbers in the URL to get more information about what plans other companies were signing up for, because that information was valuable to them.


You're assuming the "because that information was valuable to them" part. Or you're using such a broad definition of valuable that would also make this comment thread valuable because I have refreshed it multiple times.

While you could construct hypotheticals where OP is using the health plan information to gain actual value, they are all so far-fetched I wouldn't buy them as a fictional plotline. Dude was probably just curious.


A closer analogy would be that you keep the name as your name, but change the # of the document you're requesting. It's the IRS's job to ensure you're allowed to retrieve that doc.


Sure, but I guarantee you that if the IRS screwed up and gave you the other doc, and you made use of that information (rather than immediately turning around and saying "um, IRS, I think you made a mistake; this doc doesn't belong to me"), you'd be in trouble as well.


Haha that's fair.


I think the analogy would be going up to the desk and saying: my ID number is X (when it's really Y), can I have my file?

If you convince them that you really are X and they give you the file, I think that would be considered fraudulent. Whether or not an injury takes place to raise it to the level of fraud depends, I guess, on what was in the file, but in countries with strong privacy laws, someone would probably be in a heap of trouble.


Except that's not at all what they did - they simply accessed files that had been made public by the service provider.

To be able to login as BoBibbidyFooBar, and subsequently access ANY company's info in the system without changing their identity from BoBibbidyFooBar does not, in any way, constitute any sort of fraud. It literally cannot, by any sensible definition.


Intent matters. The service provider clearly did not intend that the files should be public. They screwed up, and they should take responsibility for that. But that doesn't make it ok to know about the security issue and download as many documents as you can in order to use them for your own purposes. Perhaps that wouldn't be "fraud" based on whatever definition you're using, but it's clearly unethical and immoral, and IMO hopefully illegal as well.


> I think the analogy would be going up to the desk and saying: my ID number is X (when it's really Y), can I have my file?

Not at all because what you describe involves impersonating someone else.

In the OP case, they were authenticated in the session as themselves and always acted under the truthful identity and asked for a document and access was granted.

So the analogy would be going up to the desk and saying: I'm John Doe, my id number is X (truthful value), could I see file ABC? And the attendant checks that id==X does have access to document ABC, and thus hands it over.


Nope, no way. Your analogy is wrong.

A better analogy would be you asking for your files, and then the secretary taking you to a filing cabinet containing everyone's files right there with yours. You don't have to lie about who you are; you can just look at other files because they're right there in the place you were just given access to.


How is that analogy wrong? Both in terms of the technical implementation and the subjective user experience, you're making separate requests for a document each time.

Analogies are always going to be imperfect, but I can't see the argument that the "separate request" analogy is any worse than yours, let alone "wrong".


And even in that case you're still not allowed to look at other people's documents. Like it doesn't matter that they're right in front of you, you still haven't been given authorization.


But they didn't do that. They just asked for a different file, not misrepresenting their identity.


He had already given his correct details to be able to view plans. It's like calling the cops to get your accident report, then asking for the next few report numbers up, and they hand them over.


Not sure I see how. More like the records office decided that, rather than staffing the front desk to handle records requests, they instead just dumped an unlocked filing cabinet into an alcove off the hallway with an arrow pointing to it labelled "Health Care Plans". Essentially identical to blaming users for finding an unsecured S3 bucket or MongoDB instance: it's on the operator to secure the data.


> Essentially identical to blaming users for finding an unsecured S3 bucket or MongoDB instance

I agree that it's unreasonable to blame users for finding things like that. But if those same users are downloading all the data and making use of it for their own purposes, that's not ok. Finding a vulnerability and reporting it is an admirable thing to do; exploiting that vulnerability yourself is not.


It is more like the records office decided that, but didn't tell the people whose records they were holding that they didn't feel like staffing the desk. The records office is of course 99% to blame for their incompetence here, but it is still a bummer for the people who trusted them, and better not to look.


In our version, though, the system can require you to show whatever ID or authentication the designer decides, so how can a process as simple as changing an ID in the URL be fraudulent? In this example, the person who browsed other plans either wasn't asked for any ID, or the person fetching the documents didn't check authorization. Either one is negligence on the department's/site's side.


> In real life, if you do it under false pretenses, you are.

Sure, but how is that relevant? What material false representation was made which was relied on in deciding to provide the data?


Because servers don't decide anything. They're autonomous systems imperfectly carrying out the will of humans who make the actual authorization decisions. If a computer system erroneously prints an extra 0 on a check mailed out to you that doesn't mean you get to keep the money because the computer isn't the entity that decides how much money you're owed.


> Because servers don't decide anything.

If there was no decision, much less one based on materially false information, there can be no charge related to false pretenses. Your argument against decisionmaking is an argument against your claim of false pretenses.

> If a computer system erroneously prints an extra 0 on a check mailed out to you that doesn't mean you get to keep the money because the computer isn't the entity that decides how much money you're owed.

That's neither entirely true nor at all relevant to your false pretenses claim.


Accessing data that you are not authorized to view is still wrong. The fact that someone has misconfigured the access controls doesn't change that.

I might forget to lock my front door one day, but that doesn't make it ok for you to wander into my house and look at all my stuff.


Well in this case I'm knocking on your door and you're opening the door saying "Come right on in!"

Requesting access (ie knocking on a door/typing a url) is not illegal. If you grant that request (ie invite me in/serving a webpage), I am under no obligation to psychically infer that you didn't mean to and refuse your invitation.


Unfortunately, it's never that simple. So much of it is about intent.

If I could simply use the excuse "well, the computer gave me the information", then there would be no such thing as hacking. It's always a case of the computer sending the information to you.


It's not about intent, it's about authority. If I have the authority to access something, it's legal for me to access it, regardless of my intent. I may be breaking other laws depending on what my intent is, but it's not hacking.

Compare to a restaurant: simply walking into a restaurant is not illegal, but an owner can restrict access and ban someone from their restaurant. It takes no technical skill to break into the restaurant, the door is wide open, but without authority it is trespassing. However, it is on the owner of the restaurant to actually ban someone. For a public space, be it a restaurant or a webpage, by default you are permitted access. Attempting to enter a restaurant you've never been to before is not breaking and entering, nor is accessing a URL hacking.

If a website has some user agreement saying you will not access certain portions, or even if there is just a notice on a website saying this site is not public, then they have done all they need to do to revoke someone's authority, even though they would be incredibly easy to "hack." But as laid out under Van Buren v US, you don't lose authority to access things simply because you possess some intent undesirable to the owner. If you invite me into your home and I sleep with your wife, I haven't trespassed; if you tell me to get out and I don't leave then I have.

Further, there's a distinction between accessing something by normal, legal means and accessing something by other methods. For example if you invite me into your home only after I give you a false identity, I'm trespassing because I was never legitimately given authority to enter. Likewise if you hack a system with say a stolen password, you don't have authority to access the system no matter how easy it was. But if you grant authority to someone without them having to do anything nefarious, then they have authority regardless of whether you should have done it or not. If you have something sensitive, don't put it in a place (in the real world or online) where authority to access is granted automatically and without oversight.


If I send an HTTP request, and the server (which I believe is acting on behalf of the publishing party) sends a 200 OK response along with the data, how am I to conclude I wasn't authorized? Since when is authorization the client's responsibility?


Yep.

Send me a 401 (or a 403) status and I’ll know I’m not authorised.

In the physical world, nobody would lawyer up and go to court if someone walked through an open door with a sign saying “public entry here” and saw something confidential.

If you have confidential information around in the physical world, you make sure you have facilities staff who know the difference between “public entry here” signs and “authorised personnel only” signs. You also have facilities staff who know how to fit door locks and door closers, and security staff who know how to choose appropriate locks and to enforce compliance of locking doors. And if all that breaks down, it’s not Joe Concerned-Citizen who tells you about it, or even Mallory from your competitor who waltzes out with trade secrets who gets held to account, it’s the manager and/or executive in charge of facilities and security who’d be answering the difficult questions, probably with their lawyer at their side.

It's sad that the legal system hasn't yet started to hold people to account for having incompetent web developers and server operators.
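To make the "tell me I'm not authorised" point concrete, here's a tiny sketch (hypothetical names, not any real framework's API) of the status-code distinction being invoked: 401 when the client hasn't authenticated at all, 403 when it is known but not allowed, and 200 only when the server actually means to grant access.

```python
# Sketch of the HTTP status-code semantics under discussion.

def respond(session_user, resource_owner):
    if session_user is None:
        return 401  # Unauthorized: prove who you are first.
    if session_user != resource_owner:
        return 403  # Forbidden: we know who you are; the answer is no.
    return 200      # OK: access deliberately granted.
```

The argument upthread is that a 200 here is the server, on the operator's behalf, affirmatively granting access, not the client sneaking past anything.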


If you make a library open to the public but then get upset they are reading the books, who is in the wrong here?


I generally agree with this, but there is more nuance involved- like what if the library has a sign that says "Keep out"? Does the trespasser then bear some responsibility? i.e. Being served a 403, then appending some URL param that grants access. I wouldn't call this hacking, but it's something else- like "Digital trespassing", after all the 403 is a sign, not a cop. All of this to say The Simpsons did it.


> Accessing data that you are not authorized to view is still wrong.

So if a piece of paper flies into my face and has company secrets on it and I happen to look at it, am I at fault here?

> I might forget to lock my front door one day, but that doesn't make it ok

Sorry but if you're not going to secure your belongings, then expect to be robbed.

Being 'ok' has nothing to do with it.


> Sorry but if you're not going to secure your belongings, then expect to be robbed.

It’s not even “getting robbed” really. Nobody here deprived the owner of anything. It’s more like:

Sorry but if you're not going to secure your belongings, then expect to have people look at your stuff.


A public web service is not the threshold of your home. If you want to make a domestic analogy, it's the box you drop off at Goodwill. You put something in there that you didn't mean to, and you understandably feel violated now that people are browsing it on Goodwill's shelves, but you can hardly blame the shoppers for that.


That's not the point. Of course they built a stupidly insecure system, and of course sending people to jail for finding such holes is wrong, but on the other hand an ethical person should stop accessing personal data they are not supposed to see once they've confirmed the vulnerability exists, and not make copies of said data.


Because you can do a thing does not mean you should do a thing.

If the security system is broken and you do exactly what it should be preventing, then you report it and get upset because they ask questions about you doing exactly what you did?


Say you are invited to your friend's apartment in an apartment building, but none of the apartments have locks. So you decide to open up some other random apartments and look through their things. Who is responsible?


Analogies are never helpful for things like this.

We don't need to reach for analogies to observe that, while the theoretical ideal is to report it after just one false access, no significant damage was done by accessing just a few more via manual manipulation of the browser URL, with no recording or sharing of the results. From a human perspective, no damage was done.

Whether that legally crosses a line involves a whole lot of details that few, if any people here, will be able to speak to, because of the complication of the law, and HN's conclusion as to the legality is of marginal interest even if someone competent were to give an opinion.

We can speak to the fact that even if it does technically cross a line, a prosecutor really ought to use their discretion to not prosecute since nobody was hurt. We can say that because that's just an opinion. I expect we don't have very many people here who actually want the book thrown here (though, as always, enough read this that it's probably non-zero).


I don't think quantifiable significant damage should be the bar we use, though that should act to moderate the consequences.

OP admitted to continue changing URLs in order to check out what plans other companies were getting and what they cost. That means OP downloaded lists of employee names, ages, SSNs, and other data. If I were an employee at one of these other companies, I'd be pissed at OP for that. I'd be even more pissed at the people who built the marketplace website for making the rookie security mistake that allowed it, but it's absolutely not ok to download other people's information when you shouldn't have access to it, and use that to your own advantage.

Sure, I don't think this is something that should be prosecuted as a CFAA violation with big fines and jail time. That's not a proportionate response. But I also don't think we should signal that it's ok to look at (and use!) other people's data just because someone else forgot to lock it up properly. I think, for example, something on the level of a parking ticket would be appropriate here.

If OP had changed the URL once, found the vulnerability, and then immediately closed the page and reported the problem, I would see nothing bad in what they did. But they didn't merely do that, and IMO crossed the line in their subsequent actions.


There's no evidence from the original comment that anyone invoked any legal lines. Instead, they seem to be upset that the person they reported the incident to asked them questions about exactly what they did rather than being effusively grateful.


I added it, anticipating future comments.


That's not even close to the same analogy though. This would be like knocking on the door, asking if you can come in, and the person living there letting you in. Then getting mad about it later even though they let you in.


More like your friend let you into their apartment but then got upset that you went into the dining room when they only intended for you to go into the living room.


No, this is more like if you asked the landlord to let you in, and then they did, without the permission of the tenant. The tenant would completely be within their rights to be angry about that. Both at you and the landlord.


I think that's a valid response if the person letting you in wasn't expecting you and didn't want you there. Like, what are you doing knocking on random doors and going into random places just to look around? That's not honest behavior. Honest behavior is that if you know you're not supposed to have access to a thing, you shouldn't obtain access to the thing even if you technically can. I think it's pretty clear that you shouldn't have access to another company's healthcare plans. The first one is a mistake, maybe. The subsequent browsing and comparison shopping of restricted materials is definitely not okay though, and the harsh, suspicious response was warranted.


>if the person letting you in wasn't expecting you and didn't want you there.

Then they shouldn't have let you in. How are you completely absolving them of responsibility when all they had to do was say "Who the hell are you? No, you can't come in."


Well, to go with the analogy more: I leave my door unlocked because I'm expecting someone. There's a knock at my door and I yell "Come in" without looking at who is at the door. Not an unreasonable thing, happens all the time. When I finally look, I find you in my house, going through all of my things, for no reason other than you wanted to gain insight on my financial situation.

Do I bear responsibility for letting you in? Yes. Should you be there? No. Should you have knocked on the door? No. Should you have tried the same at my neighbor's house and every house on my block? No. In this metaphor and in the original context, everyone is acting with honest intent except the actor knowingly trying to access obviously confidential documents.


It doesn't mean I am there illegally though. Maybe I am there for some other reason and I thought you wanted to let me in.


No one said anything about legality. I'm still going to yell at you to gtfo and never come back again, and I don't see why it would be surprising that I would.

Let's drop the metaphor. The original story was that someone accessed a number of documents they weren't supposed to but technically could, and the question was whether or not that it was reasonable that the owners of the documents were upset with that.

I argue there was good reason to be upset given the facts on the ground. In this particular situation, the original poster was there to access their own document. Having accessed someone else's document, that would be the point at which the behavior crosses from legitimate to illegitimate if it continues. Leaving at that point would be one appropriate response. But systematically going through a number of different documents goes beyond a mistake and into the realm of intentionally exploiting this security issue for unauthorized purposes. That's when it crosses from "honest mistake" to "dishonest exploitation".

I have no idea about the illegality of the issue. But the fact is plain that this person was not the intended recipient of the documents, they knew they weren't the intended recipient, and then after realizing the nature of the exploit, they continued to use it.

This is not the same as knocking on a door for a legitimate reason, being let in, and then the person inside being mad you're there. It's knocking on a door for no reason or a malicious reason, knowingly doing something inside the resident doesn't want you to do, and then wondering why they are mad at you.


The only person to be upset at is the one who didn't put access control on the site. That was a publicly available endpoint. The better analogy is putting something private on a public bulletin board and being mad if someone read something you didn't want them to.


A billboard is a broadcast message though, whereas an HTTP request is more like a back and forth exchange between two participants. So I think the original knock->response->enter is a better metaphor.


You let me in knowing exactly who I was. You showed me some stuff I wanted to see, but sitting right next to it, out in the open, was stuff you didn't want me to see. All I had to do was look somewhere other than where you were pointing, and I did that. And then you got mad at me for looking at the stuff and called the police.


> All I had to do was look somewhere other than where you were pointing, and I did that.

The way you phrase this makes it seem like accessing the documents was a mistake. Maybe the first one was, but I think the thing you are missing about the OP's story is that the behavior was repeated. I think the first instance was arguably okay. But subsequent access with the knowledge that what they were accessing was not intended for them is in my eyes beyond a mere misunderstanding.

You also have to remember that having physical or digital access to a thing is not the same as having permission to view the thing. For example, if a "Top Secret" document is delivered to your house with your name and address attached to it, and you read it without the appropriate clearance, you will still be in trouble. The legality of such a thing is well established in that case, but the principle is the same: even though you have access to a thing and all you have to do is move your eyes in some direction to see it, the act of seeing it is still at minimum an ethical breach (why are you looking at things that you know don't belong to you?).

I guess this is the fundamental philosophical and ethical question: do you believe you are entitled to know any information as long as you have the technical ability to physically or digitally access that information? What if I have medical records on a screen in a room you are in, and all you have to do is move your eyes over to see my most personal info? Are you entitled to read that information because it's visible to you? Or do you think you owe it to others not to breach their privacy even though you have the ability to do so? Would you be mad if someone violated your privacy, and then retorted with "well, you should have implemented some better technology to prevent me from moving my eyes in that direction"? I guess in that scenario you would have to blame yourself and your technological abilities, and not the person violating your privacy.


I was thinking of a similar analogy but I don't think it holds.

The right analogy would be if I was in the apartment complex and said to a door not mine, "I'm home, open up!" If the door opened and I did it intentionally, am I liable?

I still feel like yes but since you have to request the document and receive it I think it's different than just checking locks.


I think we're all grown-ups and don't need analogies here.


People of all ages suffer from confirmation bias. Analogies can be useful because they allow someone to appreciate the logic of an argument while temporarily dissociating from strongly-held opinions. After the framing moves back to the question under debate, the logic might stick. At least all parties might understand everyone’s perspective better after a few analogies are exchanged.


The analogies in this thread are mostly only furthering confirmation bias.

Because any physical analogy is such a poor representation of how a website actually works, everyone just cherry-picks the analogy that demonstrates the logic they believe should apply, and then tries to constrain the argument to that logic via analogy.


Not if everyone constantly shifts the analogy so their argument still works ;)


Indeed -- it is like if arguments were things to transport, and analogies were cars... wait, no, they are railroad cars.

So the argument is a heist occurring on a train, so we've got the thing that we're trying to heist (which would be our point) and then we're shifting it from one car to another. And some of the analogies here are clearly like passenger coaches, but others are more like those... coal transporting car, whatever they are called... and at some point we move to the inappropriate railroad car and drop the point in the coal which obscures it.

Anyway, the point is that at some point you really just hope that some conventional train robbers will show up and derail the whole thing because it has gotten too convoluted to follow.


A closer analogy might be if none of the apartments had doors, would you be allowed to step inside.


the web isn't a collection of personal apartments


I think in this example both are equally responsible:

1. People who kept their doors unlocked

2. Person who randomly entered doors & found things.

We need to take care of security of our properties, though stealing is wrong.


Nope, opening an unlocked door is still considered breaking and entering. AFAIK, the "unlocked door" can even be a beaded curtain. Turns out that the legal definition of "break" in this context is extremely old and doesn't correspond to lay usage anymore.

But I think that a better analogy would be asking the apartment manager to see your payment history and getting handed the entire apartment building's ledger.


More like: you go to a supermarket bathroom, check each stall, and find one person pooping without the door locked.


Being wary of the guy, sure. But it's a terrible response in general. The correct response is to take the site down! Monitoring IP addresses? Really?

First, it's trivial to just use a different IP address. Second, even if you could track people perfectly, which you can't, who the hell thinks it's okay for data to get leaked as long as you know who it gets leaked to?


It’s not a nice response, but IT needs to be able to answer questions about the extent of a given breach (what info was accessed by whom and when). This is a legal requirement in the case of health information. Ideally people could be courteous while fulfilling their legal obligations, but IT folks aren’t generally chosen for their public relations or customer service skills.


Yes, and they need to do that based on the forensic data available to them, even if the answer is “we don’t know, it could be everything.”. Asking the person who caused the breach to explain the extent of your data loss is not an acceptable, or reliable, practice.


I don’t expect that it is sufficient, but it probably gives the IT person something to tell their boss in the short term: “We’ll verify, but he says he only accessed X”.


If he can monitor IP addresses to make sure this guy isn't browsing anymore, then he should be able to check those same logs to answer his own question. If you want help from people who have zero obligation to help you, then you should probably be nice to them. The nefarious criminal isn't going to report things like this to you.


I already agreed that this doesn’t warrant unkindness.


Assessing the scope of the breach, sure. "Fixing" the breach by monitoring a single IP address's access patterns, not so much. The site needed to be taken down until a mitigation had been deployed.


Agreed.


In those situations you get a third-party in for forensics, you don't typically ask the people who breached how large the breach is (why would you take them at their word anyway? aren't they incentivized to downplay, etc).


Vehemently agree. The response demonstrates, if nothing else, the lack of an appropriate Incident Response Plan. A competent legal team would not vet and approve such a response, instead redirecting it through the appropriate channels if they felt the need to respond directly.


> After I shopped a few other companies to see how our plans compared

Yeah, once you start using a vulnerability maliciously to obtain confidential data for your own personal gain, even if it's a stupid vulnerability, you're not really a good-guy security researcher anymore.

If all you did was the bare minimum to demonstrate the vuln exists, that's cool. If after you do that you continue to use it to obtain confidential info for your own gain or curiosity, that's not so cool.

> Perhaps it's more difficult to hold yourself accountable than it is to assume that others who've found your shoddy work are malicious actors.

You literally just admitted to being a malicious actor in the paragraph above.


Language cheapens itself when spoken cheaply. Abusing over-the-top terminology on minute areas of controversy will ultimately lessen the impact of your outrage when something actually bad comes along. Someone browsing the healthcare plans available to employees of different companies is not something that should win you the label "malicious actor" and come associated with other implications. This data leaking harms literally nobody other than perhaps the company offering the worst coverage to its employees. Your response is the real problem here: if I had done this, reported it, and then been called a "malicious actor" on a forum titled "Hacker News", my knee-jerk response would just be to shut up about it next time.


I used the word "malicious". It's not like I used the word "murderer" or "evil overlord". I'm not saying OP should go to jail or anything.

All I'm saying is if you find an exploit, and after you verify it works, you continue to use it for your own personal ends, you're no longer benign and you shouldn't expect a warm welcome from the security team.

The line is crossed when you start using exploits on computers not owned by yourself for your own ends instead of for the purpose of verifying and reporting the vuln. Sure, you could cross that line a little bit or a lot, but you're not innocent if you're over it.


> you contunue to use it for your own personal ends

I think this is what people may have been missing from your original post: at some point things can go from innocent to malicious.

"Crime of convenience" is the most common type, after all.

"I'm not the type to steal, but the cash was left on the counter, and …"


> ... and you shouldn't expect a warm welcome from the security team.

The appropriate response from the security team (after verification) is to pull the site down or immediately patch the vulnerability, if possible. Making an outbound call to a third-party is pointless and irresponsible.


I imagine having an assertion that the person didn't keep any of the data might be important to legal. (IANAL)


Browsing the different plans is not malicious. Jesus.

And the details of different plans is not the kind of confidential info that innately deserves protection. Investigating or recording personal information would be bad, but they didn't do that.


Exactly... for them to "benefit", they would have to:

Apply for jobs at the other companies with better plans, proceed with interviews and offers, then finally accept one and quit their job at their current employer... to reap the rewards of their malicious hacking...


More directly, they as employees could pressure their bosses to renegotiate the insurance contract.


It wasn't just plan details though... They accessed names, SSNs, etc.


Not on purpose, and they didn't keep or memorize it.


The GP comment says "After I shopped a few other companies to see how our plans compared". That sounds pretty "on purpose" to me.

What does "keep or memorize" have to do with anything? They intentionally abused a misconfiguration to view private information.

I think it's reasonable to disagree about the ethics of that, but I don't think it's really debatable that it was intentional.


They were intentionally viewing plan information. The personal information wasn't the goal and wasn't retained.

"private" information is too vague of a term.


> malicious actor

Malice implies intent. If we take the author at their word, there wasn't any, though you could say they took it too far by looking at other stuff when they probably knew it was ethically wrong to do so.

Though, sometimes it isn't clear you're in compromising territory until you're in it.

If any of the confidential information obtained wrongly gets used to advantage … that's malice.

If the parent set out to exploit the insurer by finding inconsistent/unfair pricing, etc etc … that's malice.


Hmm. You make a good point. Fair enough.


You lost me at "maliciously".

What harm was done by someone comparing prices? What organization lost money? Who got worse health service?

"Unethical" and malicious is the current, profit-driven health insurance system.

I know you're coming at it from an absolutist perspective, but I disagree entirely with passing judgement.

Furthermore, the fact that you seem more upset with the person who glanced at a few plan prices rather than at the healthcare system, or the incompetent website operators, is telling.


I definitely agree that this is not a big ethical breach in terms of magnitude, but it is still better not to look. Apparently this is not intended to be public information. If this information is private, I guess the companies want to derive some (slight) competitive advantage from not sharing it. I think you could make a strong argument that companies should make their healthcare offerings public knowledge, but they aren't currently (I guess?). In any case, access should be granted on the basis of an even playing field.


>What harm was done by someone comparing prices?

It removes the information asymmetry, which protects the profits of the seller.


Exactly. So, no harm to any real people.


If you accessed my medical records, nobody would be “harmed” as they are fairly normal. It would still be wrong.


Because it would be a privacy issue. But that assumes they're looking at your information on purpose, and not just some price tags.


Oh no, not the heckin' confidential insurance negotiations! What's the worst that can happen by those being exposed?

