I used to work at a large public university. One day, a grad student brought me his laptop and asked if I would take a look at it because "the Internet [was] really slow." It turned out that his computer was part of a botnet controlled via IRC, and it was being used to attack hosts on the Intertubes.
After sniffing the IP address + port of the IRC server and the channel name and password the botnet was using, I joined the channel with a regular IRC client. "/who #channel" listed thousands of compromised clients, including hundreds with .edu hostnames. (One university had a dozen hosts from .hr.[university].edu in the channel. Sleep tight knowing your direct deposit information is in good hands.)
There was no way I could notify everyone, so I concentrated on e-mailing abuse@ the .edu domains. In my e-mails, I explained who I was and where I worked, that one of our computers had been compromised by hackers (yeah yeah terminology), and that in the course of investigating, I found that computers at their university had also been compromised by the same hackers. I also included a list of the compromised hostnames at their university and the IRC server's information so their networking people could look for other compromised hosts connected to the IRC server if they wanted to. Relatively basic IT stuff.
I didn't get replies from the majority of the universities I sent messages to, including the .hr.[university].edu one. I got a few thank yous, but I got just as many replies from IT Security Officers and CIOs (including at big name universities) accusing me of hacking their computers and demanding that I stop immediately or face legal action.
Those people just didn't understand, and they were in charge of (or ultimately responsible for) their universities' IT security efforts... It was completely mind-boggling to me at the time.
You like the magic and you need a few practitioners but when things start getting weird, it's pitchfork o'clock.
I have a lurking feeling that, in spite of all the technologist/futurist optimism in our community, we are likely underestimating the pushback from the world at large when enough people are finally put out of work at the same time by the same technological innovation we strive so furiously for in our own lives.
What I'm worried about is that as we move towards more ubiquity with computer technology in our lives, the "coder" will become a second-string, blue-collar job rather than a legitimate, organized profession.
The point is that not everyone will understand that the shifts are gradual, and that we're not going to sack thousands of convenience store clerks and simultaneously hire thousands of vending machine technicians because the technicians 'deserve' jobs and the clerks don't.
My point is that if enough of those disruptive technologies get introduced in a small enough time frame to put enough people out of work, then we might see some unexpected pushback.
Had you known about it, you could've got in touch with the "watch desk" and passed this information along. The watch desk has contacts for security folks at the majority of .edu's (in the US, anyway). I'd guess that about half of these "zombies" would have been offline in less than 24 hours.
I know this doesn't do you any good now, but in the event that someone else reading this discovers a security issue at a .edu in the future, I'd recommend contacting the watch desk before anyone else (either via phone or PGP-encrypted e-mail). They will, depending on severity, for example, call the .edu's security people's cell phones at 3 a.m. and wake them up, if it is warranted.
I was a member of REN-ISAC when I worked at a .edu. It is a vetted and very trusted community. Breaches of trust are dealt with quickly and severely. Any information you pass off to REN-ISAC will remain in good hands.
I tried to explain that she was wrong, but you can guess how well that went.
In high school, I was doing a programming course. I was working on my assignment in the library when the librarian came in and started yelling at me for hacking. I explained that it was course-work, and she said "Oh, alright then."
One week later, she came in yelling at me "I've already warned you once about this!", and kicked me out of the library.
Engineers aren't in charge, anywhere, other than tech companies.
Empirically speaking, a lot of the guys who graduated with their B.Sc. in computer science with me saw their career paths as joining a big consulting company, working on the front lines for a couple of years and then getting into management and leaving the code behind for good.
In my PhD program, most guys in the lab saw the actual engineering side of things as a stepping stone to higher-paid positions in academia.
Clearly a significant number of people with engineering degrees are engineers only by title.
That is, people working on actual engineering aren't really "engineers" if they have their eyes set on something else, like a higher position in academia.
Suddenly everything makes sense. :)
Seriously though, where are those stats coming from?
They understand their organisation will descend into chaos if their operations are not controlled.
But they have probably always lived with crap IT - and so do not understand what competitive advantages come from having IT well controlled. Give it thirty or so years.
The whole meme started with Nick Carr's infamous "IT Doesn't Matter" article in the Harvard Business Review (later expanded into the book Does IT Matter?). He argued that while IT provided a competitive advantage in the past, it doesn't anymore. It's important for keeping up with the competition, but it will never put you ahead of the competition because it has been commoditized. All of his arguments made perfect sense at the time. And most IT organizations to date still take them to heart.
His arguments just assumed one thing incorrectly: that enterprise IT would never change in terms of the end-user functionality it delivered. He assumed there was no more innovation to be had, that everything that needed inventing had been invented, and so we had reached the peak of functionality, like how you can't improve much upon the hammer and nail beyond perhaps the screw and electric screwdriver.
Unfortunately, IT is treated like a commodity for most organizations, and commodities never get special attention.
His argument centered around those Fortune 500 companies that purchased big ERP systems and had custom development done for various parts of their businesses as a strategic investment (and a trade secret). It turned out that almost all of these companies built similar modules, since on average they all hired smart managers who understood where inefficiencies could be eliminated via technology.
His correct conclusion was that these bits of tech were not strategic but simply the cost of doing business, and thus were open to commoditization.
Now here's where folks take a leap of faith and say that /all/ IT doesn't matter.
The way I look at it, all innovation can be strategic depending upon your business and its priorities. For most companies power and ping are commodities, but for Google they are a competitive advantage. Google would never outsource their operations to Big Blue, but for PwC that would probably be a good move.
Even at the micro scale you can see this in github repos where large companies will open source core modules but keep their competitive code proprietary.
Most CEOs think IT is a commodity like electricity - you cannot buy "better" electricity. But this is crap - way back when, you could buy better electricity - the debate ranged from power smoothing to DC vs. AC - and your smelter or your lights could depend on the Chief Electrical Officer.
oh hell I don't care anymore - anyone dumb enough to think that an IT-literate workforce working on IT-enabled processes cannot outperform an illiterate company (just as we now know a reading-and-writing-literate workforce can) deserves to get Schumpeter-ed
And even today, when it comes to price, it still matters. And in the developing world you can still buy electricity of different quality and uptime.
Is MBNA a typo for MBA, or is this a specialized certification I've never heard of?
Next semester, though, I refused to sign the new AUP (which included a clause allowing the computer center staff to seize any computer I was using, even at my off-campus home), and they kicked me out of school. (Actually what happened was they locked my course registration account, and wouldn't reinstate it until I signed the policy in their presence. I refused.)
(Sadly, I can't find the full-disclosure thread for this bug. I guess I posted it to my blog, which I deleted after being threatened by school administrators. Oh well. That was 9 years ago!)
It all depends on what rules there are, and how they are enforced/interpreted.
Maybe it's because all of our schools are public?
For example, higher-ed providers are funded based on enrollment and rate of graduation. If someone does not graduate, a significant chunk (20-30%) of the money won't be paid at all. This creates some incentive for the institution to actually guide people and see that they don't fall through all kinds of cracks. I guess it's necessary when there is no ordinary paying-customer relationship involved.
The main problem, in my view, is the professionalization of this institution-hopping class of university administrators. It used to be made up of senior faculty who got promoted to Dean, but now it's made up of an entirely separate group of people, often people who come from business management backgrounds, and who have little grounding in a particular institution's traditions or culture. They tend to think rather differently, in a more locked-down, policy-driven way, and apply broad "best practices" without much regard for how things are done in a particular place. Universities end up getting managed like a corporation, with similar kinds of policies.
Things are a bit better at small colleges (Rose-Hulman, Olin, Harvey Mudd, Wesleyan, Pomona, Colgate, etc.), which typically have much lighter-weight administration and a more pro-student, pro-experimentation attitude, as well as more success in enculturating their administrators so they "get" the local culture and work with it. But they don't scale very well (I say this despite having gone to one and being a big fan of the undergraduate-college model).
The Ministry of Education controls the money and conducts yearly performance-target negotiations bilaterally with each higher education institution. You actually need a permit from the ministry to run any kind of school. Even our few "private" primary and secondary schools are publicly funded and regulated accordingly.
The independent expert body FINHEEC audits universities' quality-management schemes regularly. Some European countries use accreditation-based evaluation (for single degree programs) instead of system-wide audits. At least one Finnish university has also acquired ISO 9001 certification, but it was seen as more labor-intensive and not providing the same benefits (benchmarking, benchlearning) as the required peer-based audits.
There are Asian countries where this model has failed. Perhaps because of population pressure or other social factors. But I truly like the Nordic way of life.
For me, it was a way to steal the AFS space of the previous user (basically, they didn't expire the token... oops). I actually found the initial vulnerability by accident (something crashed due to network problems, reconnected and went, "WTF, those aren't my files!"), but I did find a good way to reproduce it on demand (yank Ethernet cord at proper time). Thankfully, I had read enough stories like this way back then and submitted the bug anonymously. This was ~2000 or around then, mind you.
I also tried to get university management to switch people over to using SSH way back in 1998, but it was something like 4-5 years before they eventually did so. I'm guessing they had no idea what I was talking about or why it even mattered back then, even though anyone could see everyone's passwords going over the wire with all the people who had to telnet for various reasons. Maybe they assumed that log file they were writing our activity to would catch anybody doing anything weird? It was cleverly named "resugol"--read that backwards if you're confused.
FWIW, the exams are quite thought-provoking nearly 10 years later; here's a link to them: http://cr.yp.to/2004-494.html
I remember reading the course syllabus online and being jealous despite already having worked in professional vulnerability research for a few years. You're lucky to have been at the class! Was he a good lecturer?
Here's a reading list; I'd add Zalewski's _Tangled Web_ to it, but change little else: http://amzn.to/cthr46
That scared the crap out of me though and I realized this was a VERY bad idea. Something as harmless as trying to help someone make their website more secure can get you more jail time than robbing a bank.
I also, completely accidentally, logged into another student's account at my university (a big university too). The school gives you an ID number. Your initial password is the same as this ID, and you're supposed to change it later. I didn't remember my ID correctly, swapped two numbers in it, and ended up in someone else's account. Home address, phone number -- all sorts of information staring me in the face. Will I report this issue? Heck no!
It's weird how many of these I discover by accident. My school also had a hackathon hosted by eBay and PayPal. In fact, one of the programmers from PayPal was there. During the hackathon, I stumbled upon a way to get account information without authentication (security tokens were being seriously misused). The PayPal guy was shocked and asked me to send him all the information on what I had found. Never did get any sort of reward out of that... (and I lost the hackathon too).
This meme of "more jail time than robbing a bank" needs to end.
The federal penalty for possessing a firearm while robbing a bank is a mandatory minimum of 5 years and a maximum of life in prison. The mandatory minimum means that a judge could not sentence an armed bank robber to less than 5 years for each bank robbed while holding a gun (you don't even need to show it; just having it is enough). To make it worse, each 5-year gun sentence must run _consecutive_ with each other sentence (i.e., be added on after you serve the other sentences). If you brandish the gun, it becomes a mandatory minimum of 7 years, and if you fire it you get a mandatory minimum of 10 years.
Contrast that to all of the hacking charges we've discussed recently where the mandatory minimum is zero (a judge could sentence a convicted defendant to no penalty, or to probation).
To go further, the US Sentencing Guidelines, which are all-but-mandatory for federal judges (there's a constitutional out, but in effect most defendants are sentenced according to the Guidelines), assign "wire fraud" a base offense level of 7 (of 42+), which yields a sentencing range of either 0-6 months or 4-10 months, depending on how much economic harm is caused. Compare that to robbing a bank, which has a base offense level of 22; brandishing a firearm adds +5 for an offense level of 27, and if you actually make off with any cash add another +2 for an offense level of 29 (of 42+). The sentencing guidelines call for a sentence of 87-108 months (7-9 years) for a first-time bank robber, per bank, assuming that nobody gets hurt---plus the mandatory additional 5+ years for having a gun.
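The offense-level arithmetic above can be sketched in a few lines. The figures are only the ones quoted in this comment (the real Guidelines table is far larger, and criminal history category I is assumed for a first-time offender):

```python
# Offense-level arithmetic for a first-time armed bank robber,
# using only the figures quoted in the comment above.
BANK_ROBBERY_BASE = 22   # base offense level for bank robbery
BRANDISH_FIREARM = 5     # enhancement for brandishing a firearm
TOOK_PROPERTY = 2        # enhancement for making off with cash

offense_level = BANK_ROBBERY_BASE + BRANDISH_FIREARM + TOOK_PROPERTY
print(offense_level)  # 29

# Guideline range quoted for level 29, criminal history category I:
# 87-108 months, plus the mandatory consecutive 5-year gun sentence
# (18 USC 924(c)).
guideline_low, guideline_high = 87, 108
gun_mandatory_months = 5 * 12
print(guideline_low + gun_mandatory_months,
      guideline_high + gun_mandatory_months)  # 147 168
```

So even at the low end, the armed robber is looking at over 12 years, versus a wire-fraud floor of zero.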
Realistically, bank robbers face a lot more time than even malicious computer criminals.
 See section (c) of 18 USC 924 http://www.law.cornell.edu/uscode/text/18/924
What's more, you don't even have to have a gun for it to be classed as "armed robbery". In the UK, just the threat of having a firearm is enough (you could be brandishing a water pistol or even just making a gun gesture behind your unzipped coat).
You could ask in such a way that it comes across as a joke ("Anything more I can do for you today sir?" "A million bucks and a winning lottery ticket would be nice"), but if it comes across as a joke then the teller isn't going to give you any money.. because they think it is a joke.
> if it comes across as a joke then the teller isn't going to give you any money.. because they think it is a joke.
I'd also add that the vast majority of malformed requests are denied. Only computers that have a sense of humour, so to speak, comply with the abnormal requests. Computer security is much closer to this scenario than carrying a gun, I feel.
For example, there is a world of difference between a panhandler asking you "Hey, can I have a couple dollars" in a populated touristy area during the day, and the same panhandler following you for several blocks at night before asking you that in an alley. One is just panhandling, but the other is effectively a mugging.
Computers don't really have those sort of cues, so it becomes difficult to make reasonable comparisons between the two.
Also handling stolen goods is a crime. So even if you didn't personally rob the bank, if you know the money is dodgy then you shouldn't accept it.
Not that robbing a bank is all that profitable vs. the risks and penalties.
I wondered about the security of that solution, so I checked some random ID numbers to shockingly find out that about 80% of people didn't change their passwords! (I don't remember if you were actually prompted to change it upon first login, or you just had to do it by yourself). I could log in multiple times from the same IP to different accounts.
I hesitated over whether to notify someone about it, or to borrow a copy of "Mathematical analysis 1" or something like that on behalf of some 100 people in the middle of the holidays within half an hour. That would have been hilarious, but they would inevitably have thrown me out of the university if they found out, so I didn't risk it, nor did I notify anyone, due to the horror stories here and there.
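A minimal sketch of the flaw both of these stories describe - an initial password equal to the student ID, with no forced change on first login - makes the enumeration problem obvious. All IDs and passwords here are made up; none of this reflects any real system:

```python
# Hypothetical illustration of the "default password == student ID" flaw.
# IDs and passwords below are invented for the example.
accounts = {
    "1234567": "1234567",                       # never changed the default
    "1234568": "correct horse battery staple",  # changed it
    "1234569": "1234569",                       # never changed the default
}

def still_on_default(student_id: str) -> bool:
    """An account is trivially guessable if its password is its own ID."""
    return accounts.get(student_id) == student_id

# An attacker only needs to walk the (sequential) ID space and try
# each ID as its own password; every hit is a compromised account.
vulnerable = [sid for sid in accounts if still_on_default(sid)]
print(vulnerable)  # ['1234567', '1234569']
```

The fix is equally simple: force a password change on first login and reject any new password equal to the ID.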
Long story short, my manager disabled the firewall and we were hacked that night. I was let go the following day unceremoniously. I discovered soon after that the company blamed me for the attack, saying I turned the firewall off and hacked the servers myself.
The school immediately started expulsion proceedings without even contacting me. Fortunately, my advisor personally addressed the issue and had everything dropped. The drama only lasted a few days, but the school's brain-dead response to the issue gave me zero confidence in their ability to review anything objectively. I was so disgusted I refused to walk in the graduation ceremony, much to my parents' disappointment.
The actions of Mr. Al-Khabaz were unlawful and unethical. If he had only accidentally found the flaw and reported it to the responsible person, things would have been fine. But security testing without the permission of the system owner is the same as an unauthorized access attempt!
I have worked as a security professional for 7 years, and I recently gave a guest lecture at a college discussing an example like this. Most students were not aware of what the problem was. Maybe it would help to imagine how a story like this would look in the physical world:
Let's suppose you come back home and find someone picking your door lock with a lock-picking tool. You ask him "what are you doing?" and he says "I'm just checking whether your lock is safe. I do it for your security." Would you believe him? Or would you call the police immediately, without asking him anything?
Let's add to this that security testing tools can sometimes degrade the tested system's performance or even crash it. In that case, it's not just an unauthorized access attempt, but a successful denial-of-service attack!
Never, ever, do security testing of a system without the written permission of the system owner. If you get permission, you will probably be asked to sign an NDA in return. You will also need to provide some information, like the source IP address you're using and emergency contacts that can be used to stop the testing in case of problems (like crashes, etc.). This is the only lawful and ethical way to do these kinds of procedures on someone else's system.
I'm not discussing if the penalty is OK in this case. It really doesn't matter if most people here cannot tell what he did wrong in the first place.
Not that I disagree with you: always ask for permission in writing from an authorized person before performing any kind of scan or security testing.
When someone is scanning your system and you haven't authorized it, you will definitely treat it as malicious. In that moment, you don't care about the attacker's inner motives, because your system is under attack and you had better act accordingly.
I know a story about a guy who lost his job because of unauthorized Nessus scanning in his company. Every story with a convicted hacker involves some kind of scanning tool (at least nmap) used in the scanning phase; you can bet on it. Every scanning tool is an attack tool. In fact, scanners are the most useful tools for any kind of attack, because they minimize the amount of manual effort needed.
I don't know much about Canadian law, but most current laws forbid unauthorized access and _attempts_ at it.
Orthogonal to this fact is the question of what happens when an authority is brought in to resolve the conflict. And something young hackers need to learn as early as possible is that you are not entitled to due process in every possible context. It would be unlawful if you were not given the chance of a fair trial in the context of a criminal or civil lawsuit, but this does not translate well to private institutions.
In the particular case of a student's unauthorized access within a university, this problem is compounded by the fact that the university and its representatives play the roles of prosecution, judge, jury and (sometimes) defense. You also have to consider that the people doing this are not professionals of legal procedure but are pulled out of their real jobs to sort out some random mess, thus the only constraint is their common sense. I've even heard a first-hand report of a case in my university where the faculty member supposedly playing "defense" was the most gung-ho about giving the boot to the guy in question (who ended up getting a one-term suspension, but got to keep his scholarship, so it could have gone much worse).
This is probably not "fair", but it is the way it is and nobody seems interested enough to make it change. Education has a number of stakeholders with sometimes conflicting preferences and goals, so this is not a trivial problem.
But the point is that once your actions put you in harm's way, the abstract concepts of "fairness" and "proportionality of the punishment" are academic at best. My opinion is that legality is the bare minimum standard society imposes to keep barbarism at bay, and it is pretty rough itself. So it is in your best interest to conduct yourself in such a way that appeals to "the rules" happen as little as possible.
Of course it's distinguishable. Testing comes before attacking, to provide information. The two are otherwise completely unrelated. It's dead easy to distinguish between someone poking your fence and someone stealing your jewelry, for example.
If a random male servant is found to have gained unauthorized access to the princess' chamber, torture comes first and beheading comes last. In-between questioning regarding his intentions and the degree of fulfillment is optional.
There is a huge difference between catching someone in the act of breaking in, where it's reasonable to assume malicious intention, and noticing that someone entered and left, where you can see that they didn't do anything malicious.
Since the author of the article is known for partnering with students defending organizations, the whole story could be one-sided, and it would be good to judge after hearing the other side. E.g., it could be that this was not the first incident, or that there are traces of something more than just a security inspection.
The main problem with unauthorized testing (putting aside the technical problems) is that the person who performs it is in a _very_ difficult position explaining her intentions. She has already done what is considered the _second_ stage of a hacker attack. Until she can prove her good intentions, it is rightfully treated as a malicious attack.
This is what my equation means. I think everybody on this forum should be aware of this. Don't get yourself in trouble for not knowing this.
Considered by whom? There are companies that pay you money if you can find a bug in their software. And that's an open offer; they don't say 'wait, we'll get ready at 8 p.m. Friday and then you can check'. What do you think Google would do if this student used a scanner (or something else) on Gmail, found a bug and then told Google about it?
I still think that intention is the key difference here. And as you said, 'that person who performs it is in _very_ difficult position explaining her intentions'. That's why you shouldn't do any unauthorized checks: even if you intended to report your findings to the relevant authorities, you could be caught before that, and then you're screwed. But Mr. Al-Khabaz informed the university/company and was the initiator of that talk, so it kind of clears him. He was able to reasonably explain his intentions, and his punishment could have been just a warning (assuming, of course, there are no significant details we don't know about). He also didn't get any credit for the help he provided by finding the bug.
Phase 3—Gaining Access
Phase 4—Maintaining Access
Phase 5—Covering Tracks
Regarding this guy's intentions, you're probably right. The main reason why I'm commenting here is so that guys with good intentions don't get themselves into trouble for not knowing what they're doing.
Finding vulnerabilities in software on your machine and hacking other people's systems are entirely different things. By testing software you're not violating anything (except maybe EULA for some licences). By hacking other people's systems, you're committing a crime.
> What do you think would Google do, if this student used scanner(or something else) on gmail and found bug and then told Google about it?
At first, they would treat it like an attack. Like almost any other company would do. I have no idea what would happen later.
Scanning is an active method and it's done on the attacked system. Web scanning is not the same as web crawling (downloading the pages of a site). It includes all kinds of invasive tests, like SQL injection, XSS, command injection and other attack attempts. It can cause many kinds of problems, as mentioned elsewhere in this thread.
From a security perspective, scanning is an attack. Everyone who uses these tools should be aware of this.
In a general sense, it's not difficult to find instances of behaviour that, while lawful, are far from ethical, so those two things don't necessarily travel together. Some examples: http://en.wikipedia.org/wiki/Sexual_Sterilization_Act_of_Alb...
Obviously this could be a long list...
In this specific instance it seems that his information was exposed by this flaw along with everyone else's. Wanting to verify the safety of your own information feels like a pretty reasonable and ethical thing.
I think I would rephrase your example a little: "Let's suppose you let someone store their stuff at your house, and you come back home and find them picking your door lock with a lock-picking tool. You ask him "what are you doing?" and he says "I'm just checking whether your lock is safe. I do it for your security." Would you believe him?"
If you are in the business of finding vulnerabilities in IT systems, you should be aware of it. If for nothing else, to save yourself from situations like this.
This guy is not a security professional (yet), but running vulnerability scanners on other people's systems definitely puts him in that context.
What can happen when production Web applications are tested includes:
Junk data inserted into databases
News feeds filling with random input
Log files filling up
Accounts getting locked out
Internet bandwidth consumption
Scans that take longer to complete
High server and database utilization
Incident response teams and managed security providers having to deal with alerts
Final cleanup needed after the fact
Web scanners mount massive offensive attacks. They basically DoS your site in many ways, trying millions of attack vectors.
Mitigating vandalism is very hard, and the more you do, the more it hurts users. Generally you leave things as open as possible and it is OK, since it's not a security issue per se and most sites can live their whole lives never having been attacked this way.
There's no money in vandalism and unless you piss off skilled or determined people it won't be abused.
Someone could write a script to cause thousands of dollars of damage to Wikipedia without much trouble. But Wikipedia chooses to leave itself open and take the risk. They don't have a bug. They are trying to do the right thing by users.
I think you mentioned a different issue here: puerto called the unauthorized check 'unethical', and you talk about performance. If Mr. Al-Khabaz had used some noninvasive scanner which didn't impose any serious technical overhead, would that be OK by you?
> Someone could write a script to cause thousands of dollars of damage to Wikipedia without much trouble. But Wikipedia chooses to leave itself open and take the risk. They don't have a bug.
I don't really understand what you mean when you say 'open' - open to what? But I think Wikipedia has some protection mechanisms, because at their scale, if someone could easily bring them down, someone would have.
Yes, passive scanning is fine with me, it's probably legal in most countries, but this is not certain (See Google and wifi). But I don't see the relevance to the conversation.
Passive automated scanning is fairly useless so it's not really used.
The fact is he broke the law at a criminal level and caused damage; if you can't see this, you really have no idea of the reality of the technology he was using.
But what should happen to him for it is a discussion for a different thread.
More accurate would be catching your tenant picking every single apartment's lock to prove that their personal lock is vulnerable.
Warning the system owner doesn't give you the ability to run pen tests if they do not wish you to do so.
I would believe that it would really only make a difference if the systems administrator replied to your warning with acceptance and an invitation to do so.
Morals being subjective, how do you feel it would change the legal conditions?
The trespassing - using a system in nonstandard ways - could still be considered "malicious", even if the user's intent was not. (I'm not making judgments on the guy so much as imagining that prior warning is not sufficient.)
This seems similar in many respects to the Aaron Swartz case. My initial response rejects the idea that all actions regardless of motive should be taken as equally unlawful and unethical.
I'm not sure he should be expelled, but definitely reprimanded.
The industry and the legal system don't have a pigeonhole for that. You'll be labeled a "hacker" (and not in a positive sense). Either disclose the vulnerability immediately to get recognition, hoping it is public enough that they'll be ashamed to go after you, or sell it and profit from it. You are already treated as a criminal by these large institutions, so if you go in that direction you might as well make some money.
After testing this on my own account, I reported it right away to the university. They thanked me and fixed the problem within days.
But after reading these horror stories, I feel extremely lucky that they didn't do something much stupider. My entire academic career could have been destroyed, as well as my professional one if they'd decided to press frivolous charges.
People who go after security bug reporters tend to never fix the bugs in question. They're, like, too righteous for it.
edit: see the $800k 'damages' Gary McKinnon allegedly caused. It's not like he smashed their equipment with a sledgehammer or something.
Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected.
If you find a security flaw in a system and report it, receiving positive feedback doesn't automatically imply that you have permission to conduct further tests. A web application vulnerability scanner can cause damage to production systems.
Almost anyone can just download a scanner and run a wild test using default settings. But it's illegal to do so without prior authorization.
While his intentions were good, I think it was a bit naive of him to take it upon himself to make sure the flaws were fixed and to conduct a test. Even when you have permission to conduct a test, you stick to the scope and limits of the agreement. You can't just keep leapfrogging networks as you find holes.
Manually finding holes/bugs accidentally and reporting them is different from running a vulnerability scanner.
I don't think he should have been expelled without being given a chance to explain his story, and the way they did it was not ethical. The management overreacted, especially considering there were no damages mentioned in this case.
> While his intentions were good, I think it was a bit
> naive of him to take upon himself the responsibility to
> make sure the flaws were fixed and conduct a test.
While I doubt his intentions were malicious, it certainly seems like he got curious / excited from his first find and went looking for more.
With that being said, I definitely feel for the guy. I can certainly understand the intrigue and curiosity that would lead him to continue his exploration. It sucks that they decided to bring the hammer down so hard.
In the second scenario, you probably are hurting innocent people.
So if you have a moral compass, you should maybe bother being an anonymous white hat.
That said, please don't think this is going to end your career. There are a lot of companies and startups that would love to have you for your kind of initiative. Not having a degree that you don't seem to need anyway will not be a sticking point with them. And the option of starting your own consultancy is a possibility - you already have some publicity that can help with initial gigs.
If you'd like to try your hand at a job, do check out ThoughtWorks (www.thoughtworks.com). We don't usually stand on ceremony or make a fuss about qualifications.
We're a little far away (Australia), but otherwise you'd get in the door for an interview at the very least.
Furthermore, any student faced with potential expulsion would have been entitled to a series of quasi-judicial hearings and assistance in preparing their defence. To expel someone for non-academic reasons from a publicly-funded institution (which Dawson is) should not be taken lightly and surely never in a fashion where the accused is not permitted to present their case.
This happened to me twice in college, minus the expulsion part. In the less interesting case the University sent around a form to be used in nominating student speakers for commencement. It included a drop down that was keyed off of student id. Student ids were regarded as private.
The school required everyone to either buy health insurance from them, or provide proof of insurance. They had a webapp where you could report this data. The login required your student id, name, and birth date (thanks Facebook). If you visited the app after using it, the form auto-populated with your health insurance information. I brought it to the attention of the University and they took down their nomination app in a matter of minutes.
In the more exciting incident, someone at Sungard called my university and asked them to have the campus police arrest me. (Edit: Quite boring, really http://seclists.org/bugtraq/2008/Jan/409)
Now they are.
I wasn't really trying to hide anything, so one of the IT guys must have seen me with shell access and reported me. My punishment was having my ethernet turned off in my dorm room (even though the incident occurred in a computer lab while the dorm's ethernet wasn't ready for use yet). I appealed the decision and met with the Dean, and she said I was considered a threat to the school, so I should be happy that my punishment wasn't worse.
Anyways, the rest of the year in the dorm was spent playing a cat-and-mouse game. I used my computer on my roommate's LAN port, so they ended up shutting off his ethernet as well. I felt bad about that, especially since they refused to give him internet access for the rest of the year. So I ended up making a 50-foot ethernet cable and running it through the bathroom into another person's room (two 2-person dorm rooms were connected by a common bathroom). That got shut off, so I bought a new LAN card (to get a new MAC address) and connected to another ethernet drop. I was able to get online for the rest of the year, but that sure left a sour taste in my mouth for my school.
Edit: I remember one close call... over a break (I was one of the few people in the dorm), water came out of the shower drain and flooded our rooms. I came back from spending the day out to see the Dean going into our room to inspect the damage, and I quickly had to hide my 50 foot cable that went through the bathroom.
What message does this send to other students at Dawson? Don't be curious; don't go out of your way to do a favour for the safety of your peers; keep your mouth shut and we'll hand you your degree.
Someone give him a scholarship to a legit university!
Unfortunately, if they were at all competent they wouldn't be teaching at a place like that. CS programs at minor universities are notoriously poor and staffed by whoever they could get, and it's not going to be anyone that can make decent pay working on current technology.
While I'm sure they wouldn't get the cream of the crop, there's reportedly an excess of under-employed & under-paid PhD's and post-docs in a number of STEM fields (again, specifically in academia).
Anyone who is actually teaching a CS course at a CC or a CEGEP as a full-time job is doing it for non-pecuniary reasons, which may include being incompetent while still having attained a qualification sufficient to teach.
Point being, if you can't hold up to a white hat scan, you're likely already hacked. Security is how you enforce your policy. But it's only white hat until data is compromised, and that's where the prosecution comes in.
In the meantime, until we can make this understood, we need to make the workaround understood: if you find a security flaw in a system you don't own, and you haven't been formally hired for the specific purpose of finding that flaw, ignore it and get on with your life; it's not your problem. Going out of your way to help people in normal circumstances is noble. Going out of your way to help people who will reward you with a knife in the back is a mistake. Don't make that mistake.
"Ethan Cox is a 28-year-old political organizer and writer from Montreal. He cut his political teeth accrediting the Dawson Student Union against ferocious opposition from the college administration and has worked as a union organizer for the Public Service Alliance of Canada."
Yes, even Google and Microsoft have bugs in their software. This isn't an excuse to bully people who tell you about the bugs in yours. The difference between you and Google is that Google pays people who find bugs in their software, especially serious security flaws, even if they aren't employed by Google, rather than threatening them with legal action.
I can understand Ahmed's youthful curiosity about whether the vulnerabilities that he identified had been fixed...But he had handed off the info to the Dawson College IT team and the ball was no longer in his court.
Running Acunetix against the college's/SkyTech's server(s) was a pretty dumb move. But hell, when you are in your early 20s, that's when you are supposed to make dumb mistakes.
I'm all for teaching moments, but this "One Strike And You Are Expelled" issue irks me.
Ultimately, this is about Edward Taza of Skytech Communications being sleazy and manipulative by threatening a scared, inexperienced 20 y/o college student with expensive legal action and implying the possibility of jail time unless he signed a non-disclosure agreement.
The EFF should probably take a look at this.
I would now never report a security flaw without an ironclad set of laws in place to protect the rights of white hats, whether we are licensed and approved security researchers or not.
If you are going to be a lying asshole and deny something, do yourself a favor and deny it outright. Don't try to imply that you were just having a friendly conversation about "legal consequences" right before you solicit someone to sign a non-disclosure agreement. No one in the world will believe you weren't trying to intimidate this poor kid into compliance.
No, it didn't, because he was blackmailed into the NDA. It's completely unenforceable. It was signed under duress and only benefited one party.
It's not like it magically binds your tongue. It just makes it easier to sue you if you violate it. The fact that the student could win in a suit is irrelevant. He couldn't afford the time and money to fight.
Before he signed the NDA, they would have had a harder time suing him. Perhaps he could have spent merely $10k and gotten it quickly dismissed. After, the company could make it arbitrarily expensive for him to fight it. If he could have eventually proved coercion (which I'm honestly skeptical of) then he would have been off the hook -- after years of stress and massive lawyer bills.
Sensationalist journalism is what it is. After a little bit of research, I discovered it's written by someone who used to be in Dawson's Student Union, so I guess he has a grudge against the administration.
After all, there is private data insufficiently safeguarded. Some poor girl could end up getting stalked if the right kind of sleaze came across this.
The threats by the Skytech CEO Edouard Taza; the college not allowing the professors to hear the student before voting; his transcripts vandalized with zeroes so he cannot continue his studies elsewhere... What exactly is the relationship between Skytech and this college?
I've signed the petition to reinstate Hamed:
Hamed, stick to your guns. You did the right thing.
Now why is this story different this time? I'm not too sure since I've left a couple years ago, but my guess would be that the college administrators have taken this decision. Knowing Edouard Taza, I doubt he would have pushed for this student to be expelled, since he clearly has a great future in software and could be one day employed at Skytech to fix even more security holes.
Edit : hadn't finished reading the article, it seems the professors decided to kick the student out : "Following this meeting, the fifteen professors in the computer science department were asked to vote on whether to expel Mr. Al-Khabaz, and fourteen voted in favour." To me what this says is their computer science department is full of idiots. Any good CS professor would have understood that Hamed didn't have any malicious intent.
He got kicked out of CEGEP. He'll survive unharmed. Sad that he thinks getting publicity is worth it though.
So my friend showed it to me and I suggested he tell the IT department. Obviously, the next thing we know, he's accused of "hacking" and gets threatened by the IT department.
A couple of days later, we check the website again and realize that a trivial "encryption" has been added, i.e. you have to reverse the student number or something like that. And, obviously, it's done only on the client side.
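A minimal sketch of why a scheme like the one described above is worthless. The function names and the exact transform (plain string reversal) are assumptions for illustration; the point is that anything done purely client-side is readable in the page source and trivially invertible by anyone.

```python
# Hypothetical sketch of client-side "encryption" by reversal.
# Anyone who can view the page source can see the transform and undo it,
# so it provides zero protection.

def obfuscate(student_id: str) -> str:
    """Roughly what the site's client-side script did (assumed): reverse the id."""
    return student_id[::-1]

def attacker_recover(obfuscated: str) -> str:
    """An attacker simply reverses it back."""
    return obfuscated[::-1]

original = "1234567"
sent_over_wire = obfuscate(original)            # "7654321"
assert attacker_recover(sent_over_wire) == original  # fully recoverable
```

Since the transform runs in the browser, the server gains nothing: the secret and the algorithm are both handed to the attacker.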
A little bit pissed, we decided to take revenge for being threatened for just being nice. So we create a web page explaining the story (that we found an entry point, that we told IT, etc.), and then we say "Try it!" [<enter student number>], which logs you directly into that student's account.
We e-mail that page to the main directors of the school, suggesting a quick fix, and we make sure to CC the IT department.
The day after, it was fixed, and we received a real "thanks" from the administration. I guess the trick is to contact a higher authority rather than going directly to the IT department.
Basically, they say Ahmed did more than just what is reported in the article, and they can't publicly say what he did - because that's private info about Ahmed that they're legally obliged to protect.
Now I'm not taking a position in favor of the college or in favor of Ahmed. I'm just saying, it's not all black (or white). The National Post article is biased and we're missing some info. We should remember that before going crazy on the witch hunt.
Based on other stories of bureaucratic ignorance it's easy to jump on the administrative / cover-up blame train, but something about this doesn't quite mesh, and the fact that the story's only sources are 1) Ahmed and 2) a generic students' rights organization makes it difficult to digest.
The faculty told me that there are other things that caused this and they are unable to discuss them with me.
I wish it were possible to get that information but I know them and I trust them.
We just don't know.
How long did it take Sony to fix their issues? Oh, right, it took someone exposing them publicly. Heh. It's unfortunate how broken some IT organizations are, and that they would rather kill the messenger than fix things.
You got the picture.
In big companies it might take some time.
Essentially it is a very broken system that destroys itself.
It is like you need a manager to watch over a manager that watches over a manager.
It is funny to work at such companies. I got fired from one when I said everything I thought about them.
That's not to say that the expulsion still doesn't reek of BS, but Ahmed's hands are not completely clean here.
Again, the school is on record as giving him kudos for reporting the error - it's perfectly reasonable to assume that someone will not launch offensive penetration testing tools at your site, without notice or permission, just because they have reported the bug in the past.
He could have tested the bug without the pentest software, besides. Just because someone points out a crack in your window doesn't give them carte blanche to try breaking it after you said you fixed it.
He has a key, they let him in, that's their job. The problem is that he could open his box, or any other box, without actually using the key.
Again, the problem isn't that he found and disclosed a bug, the problem is that he attempted to exploit that bug after the fact.
You do not have the right to do that. Pure and simple.
Finding and disclosing a bug is one thing, utilizing it is something else entirely.
Let's assume it was not SQLi but an authorization logic bug in the application, i.e. changing a parameter passed by the browser allowed access to the whole record set. He did the right thing and told the vendor -- but after the fact he ran a tool that probably simulated SQLi on every damn parameter!
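The kind of bug described above is commonly called an insecure direct object reference (IDOR). A minimal sketch, with made-up record data and handler names, of what the flaw and its fix look like:

```python
# Hypothetical sketch of an authorization-logic bug (IDOR): the handler
# trusts a browser-supplied record id and never checks ownership, so
# changing the id in the request exposes any record in the set.

RECORDS = {
    1: {"owner": "alice", "grade": "A"},
    2: {"owner": "bob",   "grade": "C"},
}

def get_record_vulnerable(session_user: str, record_id: int) -> dict:
    # BUG: no check that record_id belongs to session_user.
    return RECORDS[record_id]

def get_record_fixed(session_user: str, record_id: int) -> dict:
    # FIX: enforce ownership server-side before returning the record.
    record = RECORDS[record_id]
    if record["owner"] != session_user:
        raise PermissionError("not your record")
    return record

# bob can read alice's record through the vulnerable handler:
assert get_record_vulnerable("bob", 1)["owner"] == "alice"
```

Note that this class of bug involves no SQL at all, which is why blasting every parameter with a SQLi-oriented scanner was both noisy and beside the point.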
Like smashing a car window after telling the owner he has left it unlocked.
Even a brain-dead sysadmin would notice it in the logs, and likely whatever SIEM they have would fire a high-priority alert.
He did this without auth and the company did the right thing here. In this post aaronsw world we can't just assume that every n00b clown whitehat hacker is totally innocent of all crimes even if done with the best intentions. People need to take responsibility for their actions. An ignorant click can be just as criminally negligent as stabbing a dude in the face.
Based on the article, your life probably doesn't feel so good right now. Sorry to see a bright person in such a situation.
Give me a ring if you are looking for an internship, job or start-up experience in Montreal. We are in town (walking distance from Dawson actually). By the nature of our business, we also have good connections with academia if that can help (www.tandemlaunch.com).
My login is my name so you can reach me at [firstname].[lastname]@tandemlaunch.com
Just go to the school paper or town paper and let them report it.
He did great up to the point where he tried to pen-test after reporting it. I understand the intellectual curiosity to see if people are doing their jobs and it's too easy to armchair quarterback but if you bring attention to yourself by reporting a problem you can be sure they will watch you and not necessarily the problem.
The best action to take when you find a security flaw is to do nothing. Let someone evil abuse the flaw and make the guys miserable enough to realize the importance of responsible disclosure.
Without this, the guy's ego is going to take it as "How dare he point out a problem in my/our work" and not "Thanks for saving my life before somebody could screw me."
This is illegal! Most people seem to be missing this.
If you're going to break the law at your own University at least cover your tracks.
Don't annoy the crap out of them (rightly or wrongly) and then go on to black-hat them.
Remind me to never, ever use Omnivox, or any Skytech software, ever.
What a clusterfuck. Since when do CEGEPs expel students for running security checks?
I've seen things like this happen before. You find a bug, you report it, they tell you "oh we're getting on it immediately". Some time goes by and you think, hey, did they fix it? You look, discover "nope", think "man I bet those guys would fix it if I lit a fire under their ass" and try and use the bug to deface the site, or something.
This is logic that makes sense to a 20-year-old (speaking as a former 20-year-old). I've seen it happen before. The article doesn't say this, but perhaps, reading between the lines, the second attempt did not have a pure motivation behind it...
We brought it to the attention of the head of the IT Department by email. Later that week, the head visited our morning class to discuss this with us.
He discussed the issue to the class and actually acknowledged his appreciation for students like us for reacting promptly and responsibly over the issue.
I'd give it a shot if they fired their president, but that's an unrealistic expectation.
Missed that part - now it makes me think back on my suggestion. Probably, he should just look around on HN. :-)
On a serious note, can't he appeal to some education ministry outside the college?