Youth expelled from Montreal college after finding security flaw (nationalpost.com)
696 points by lasercat on Jan 21, 2013 | 300 comments



I've already posted my "almost got arrested for using zsh" story, so here's another one:

I used to work at a large public university. One day, a grad student brought me his laptop and asked if I would take a look at it because "the Internet [was] really slow." It turned out that his computer was part of a botnet controlled via IRC, and it was being used to attack hosts on the Intertubes.

After sniffing the IP address + port of the IRC server and the channel name and password the botnet was using, I joined the channel with a regular IRC client. "/who #channel" listed thousands of compromised clients, including hundreds with .edu hostnames. (One university had a dozen hosts from .hr.[university].edu in the channel. Sleep tight knowing your direct deposit information is in good hands.)
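
(For anyone curious, the "join and list" step is just a handful of raw IRC protocol lines. Here's a minimal Python sketch of what a regular client sends and looks for; the host, port, channel, and key are placeholders, not the actual values I sniffed:)

  # Sketch of the raw IRC exchange a regular client performs. The server
  # address, channel, and key below are made-up placeholder values.
  import socket

  HOST, PORT = "198.51.100.7", 6667      # sniffed IP:port (placeholder)
  CHANNEL, KEY = "#examplechan", "examplekey"

  sock = socket.create_connection((HOST, PORT))
  f = sock.makefile("rw", encoding="utf-8", newline="")

  def send(line):
      f.write(line + "\r\n")
      f.flush()

  send("NICK observer")
  send("USER observer 0 * :observer")
  send("JOIN %s %s" % (CHANNEL, KEY))
  send("WHO " + CHANNEL)                 # the "/who #channel" step

  for line in f:
      line = line.rstrip()
      if line.startswith("PING"):        # answer keepalives or get dropped
          send("PONG " + line.split(None, 1)[1])
      elif " 352 " in line:              # RPL_WHOREPLY: one connected client per line
          print(line)
      elif " 315 " in line:              # RPL_ENDOFWHO: listing finished
          break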

There was no way I could notify everyone, so I concentrated on e-mailing abuse@ the .edu domains. In my e-mails, I explained who I was and where I worked, that one of our computers had been compromised by hackers (yeah yeah terminology), and that in the course of investigating, I found that computers at their university had also been compromised by the same hackers. I also included a list of the compromised hostnames at their university and the IRC server's information so their networking people could look for other compromised hosts connected to the IRC server if they wanted to. Relatively basic IT stuff.

I didn't get replies from the majority of the universities I sent messages to, including the .hr.[university].edu one. I got a few thank yous, but I got just as many replies from IT Security Officers and CIOs (including at big name universities) accusing me of hacking their computers and demanding that I stop immediately or face legal action.

Those people just didn't understand, and they were in charge of (or ultimately responsible for) their universities' IT security efforts... It was completely mind-boggling to me at the time.


Evidently the corollary to Arthur C Clarke's famous quote on technology and magic is that those who create it are witches and wizards.

You like the magic and you need a few practitioners but when things start getting weird, it's pitchfork o'clock.


I don't know if he was the first, but I assume you got this from Robert Graham's recently proposed corollary "Any sufficiently technical expert is indistinguishable from a witch" http://erratasec.blogspot.com/2013/01/i-conceal-my-identity-...


It actually just occurred to me spontaneously, but I'm pleased to find myself in good company.


The problem with that corollary is that the modern equivalents of witches are doctors, but there isn't really any significant negative connotation there for the average person. It's more like technical expertise + shitty marketing.


What an incredibly succinct way to put it. Props.

I have a lurking feeling that, in spite of all the technologist/futurist optimism in our community, we are likely underestimating the pushback from the world at large once enough people are put out of work at the same time by the very technological innovation we strive so furiously for in our own lives.


I've always doubted we would ever encounter this situation, to be honest. The invention of vending machines didn't put convenience stores out of business, but it did create a new class of technician to service them.

What I'm worried about is that as we move towards more ubiquity of computer technology in our lives, the "coder" will become a second-string, blue-collar job rather than a legitimate, organized profession.


When I made this point to my stepfather as a child, he got indignant, claiming that I was saying that the jobs getting replaced 'weren't good enough' and the people who did them 'didn't deserve jobs'.

The point is that not everyone will understand that the shifts are gradual, and that we're not going to sack thousands of convenience store clerks and simultaneously hire thousands of vending machine technicians because the technicians 'deserve' jobs and the clerks don't.


You're right - the invention you mentioned did not put stores out of business. But there have been inventions and technologies and new business models that have put people out of work before. That's not a controversial fact, I think...

My point is that if enough of those disruptive technologies get introduced in a small enough time frame to put enough people out of work, then we might see some unexpected pushback.


I wonder whether, instead of the "traditional" software industry feeling the brunt of that push-back, it'll be the robotics industry?


Perhaps; but I think the anger will be directed toward the perceived elitist "intellectual class". I mean, we already see a lot of that rhetoric in politics. (At least in American politics.)


That conjured up images of Unseen University...


REN-ISAC[0] exists for incidents like this.

Had you known about it, you could've got in touch with the "watch desk" and passed this information along. The watch desk has contacts for security folks at the majority of .edu's (in the US, anyway). I'd guess that about half of these "zombies" would have been offline in less than 24 hours.

I know this doesn't do you any good now, but in the event that someone else reading this discovers a security issue at a .edu in the future, I'd recommend contacting the watch desk before anyone else (either via phone or PGP-encrypted e-mail). They will, depending on severity, for example, call the .edu's security people's cell phones at 3 a.m. and wake them up, if it is warranted.

I was a member of REN-ISAC when I worked at a .edu. It is a vetted and very trusted community. Breaches of trust are dealt with quickly and severely. Any information you pass off to REN-ISAC will remain in good hands.

[0]: http://www.ren-isac.net/



In high school I had to write a long apology essay in part because my computer teacher testified to the principal that the Windows command line is "a high-security area of the computer that students have no business accessing."

I tried to explain that she was wrong, but you can guess how well that went.


While we're sharing anecdotes...

In high school, I was doing a programming course. I was working on my assignment in the library when the librarian came in and started yelling at me for hacking. I explained that it was course work, and she said, "Oh, alright then."

One week later, she came in yelling at me "I've already warned you once about this!", and kicked me out of the library.

* confused-look


I would guess that being a CIO is 80% about management/people skills and 20% about technology. Hopefully that goes some way to explaining why these people did not understand your email.


"People skills" includes understanding your areas of competence and getting assistance from your underlings where needed. Sending back a nastygram because you don't understand what's going on is as much a failure of people skills as it is technology skills.


This is a C-level position at a publicly funded institution; that ratio is closer to 95% and 5%. I would even go so far as to say that these individuals very likely have a background in law or simply have an MBNA.

Engineers aren't in charge, anywhere, other than tech companies.


30% of MBAs are engineers, and the most common degree for CEOs is engineering. 1/3 of S&P 500 CEOs have an engineering degree, even though only a small fraction of the S&P 500 is tech companies.


A lot of people get engineering degrees as a signalling mechanism to prove they can do hard work, not because they have any interest in becoming engineers. Formal training in a subject combined with a lack of intrinsic curiosity about the subject makes the worst engineering managers you will ever meet.


I think this is probably true.

Empirically speaking, a lot of the guys who graduated with their B.Sc. in computer science with me saw their career paths as joining a big consulting company, working on the front lines for a couple of years and then getting into management and leaving the code behind for good.

In my PhD program, most guys in the lab saw the actual engineering side of things as a stepping stone to higher-paid positions in academia.

Clearly a significant number of people with engineering degrees are engineers only by title.


The claim was that "engineers are never the ones in charge." If your argument is that engineers cease to be engineers once they get into management, then it's tautological that "engineers are never in charge."


The point as I read it was that people who aspire to management aren't really engineers in the truest sense, but engineers "only by title".

That is, people working on actual engineering aren't really "engineers" if they have their eyes set on something else, like a higher position in academia.


http://www.theregister.co.uk/2013/01/21/engineers_cold_and_d...

Suddenly everything makes sense. :)

Seriously though, where are those stats coming from?


I still can't wrap my head around this. The CFO damn well understands finance. The COO understands operations. Why aren't CIOs held to the same standard?


The CFO understands finance because the people who hire CFOs know their organisation will bleed out if money is not controlled - they understand the consequences of mismanaging it

They understand their organisation will descend into chaos if their operations are not controlled

But they have probably always lived with crap IT - and so do not understand what competitive advantages come from having IT well controlled. Give it thirty or so years


Highly doubtful. My day job is at a big IT company. Possibly the most well-known in history. You can guess. I'm the lead guy on my team for running our quality control. Six Sigma-style stuff. The guy in charge of international training for this quality program said, "The fact is, IT is now a commodity."

The whole meme started with Nick Carr's infamous "IT Doesn't Matter" article in the Harvard Business Review. He argued that while IT provided a competitive advantage in the past, it doesn't anymore. It's important for keeping up with the competition, but it will never put you ahead of the competition because it has been commoditized. All of his arguments made perfect sense at the time. And most IT organizations to date still take them to heart.

His arguments just assumed one thing incorrectly: they assumed that enterprise IT would never change in terms of the end-user functionality it delivered. He assumed there was no more innovation to be had, that everything that ever needed to be invented had been invented, and so we had reached the peak of functionality, like how you can't improve much upon the hammer and nail beyond perhaps the screw and the power screwdriver.

Unfortunately, IT is treated like a commodity for most organizations, and commodities never get special attention.


I read the original article a while back, so forgive me if my memory is a bit off.

His argument centered on the Fortune 500 companies that purchased big ERP systems and had custom development done for various parts of their businesses as a strategic investment (and a trade secret). It turned out that almost all of these companies built similar modules, since on average they all hired smart managers who understood where inefficiencies could be eliminated via technology.

His correct conclusion was that these bits of tech were not strategic but simply the cost of doing business, and thus were open to commoditization.

Now here's where folks take a leap of faith and say that /all/ IT doesn't matter.

The way I look at it, all innovation can be strategic depending upon your business and its priorities. For most companies power and ping are commodities, but for Google they are a competitive advantage. Google would never outsource their operations to Big Blue, but for PwC that would probably be a good move.

Even at the micro scale you can see this in github repos where large companies will open source core modules but keep their competitive code proprietary.


I think we are agreeing - it is rare that a senior board member has seen how effective, competent IT embedded throughout an organisation can transform that organisation's flexibility, responsiveness, etc.

Most CEOs think IT is a commodity like electricity - you cannot buy "better" electricity. But this is crap - way back when you could buy better electricity - the debate ranged from power smoothing to DC/AC - and your smelter or your lights could depend on the Chief Electrical Officer

...

oh hell I don't care anymore - anyone dumb enough to think that an IT-literate workforce working on IT-enabled processes cannot outperform an illiterate company (just as we now know a reading-and-writing-literate workforce can) deserves to get Schumpeter-ed


> But this is crap - way back when you could buy better electricity - the debate ranged from power smoothing to DC/AC - and your smelter or your lights could depend on the Chief Electrical Officer

And even today, when it comes to price, it still matters. And in the developing world you can still buy electricity of varying quality and uptime.


> I would even go so far as to say that these individuals very likely have a background in law or simply have an MBNA

IS MBNA a typo for MBA or is this a specialized certification I've never heard of?


Oh, yes I meant MBA. Sorry for the confusion.


The person answering emails for abuse@... shouldn't be the CIO or another management type. It should be a tech support person who has a modicum of clue and knows who to forward the email to so it can be dealt with and responded to appropriately.


I'm not disagreeing with you at all, but if they don't understand the technology, then they should have an underling who does understand it monitoring the abuse@ mailbox...


My rule of thumb is not to bother contacting another .edu unless I know someone there. It isn't worth the pain and possible career problems.


I found something like this at my school. The administration reacted similarly. But fortunately, I was taking djb's Unix Security Holes at the time, and a harshly-worded note from djb to the Computer Center folks ended up getting me a thank you.

Next semester, though, I refused to sign the new AUP (which included a clause allowing the computer center staff to seize any computer I was using, even at my off-campus home), and they kicked me out of school. (Actually what happened was they locked my course registration account, and wouldn't reinstate it until I signed the policy in their presence. I refused.)

(Sadly, I can't find the full-disclosure thread for this bug. I guess I posted it to my blog, which I deleted after being threatened by school administrators. Oh well. That was 9 years ago!)


These expulsion stories sound really weird. I mean, you pay for all of your studies and could still get axed on a whim? Whereas in my country I get paid to study and have zero chance of being expelled for these kinds of events.


Even countries in Scandinavia will expel people who break college rules. And if you pay for and board a train in Finland and break their rules, you can be kicked off.

It all depends on what rules there are, and how they are enforced/interpreted.


Which country, may I ask? Nordic?


Yes, Finland.

Maybe it's because all of our schools are public? For example, higher-ed providers are funded based on enrollment and rate of graduation. If someone does not graduate, a significant chunk (20-30%) of the money won't be paid at all. This creates some incentive for the institution to actually guide students and see that people don't fall through all kinds of cracks. I guess it's necessary when there is no ordinary paying-customer relationship involved.


My experience in the U.S. is that public universities aren't much different from private universities (at least the nonprofit ones) on these kinds of policies. They might be better in other respects, such as lower tuition, but they're run by similar kinds of administrators. Often literally the same administrators: there's a lot of churn as people hop between institutions.

The main problem, in my view, is the professionalization of this institution-hopping class of university administrators. It used to be made up of senior faculty who got promoted to Dean, but now it's made up of an entirely separate group of people, often people who come from business management backgrounds, and who have little grounding in a particular institution's traditions or culture. They tend to think rather differently, in a more locked-down, policy-driven way, and apply broad "best practices" without much regard for how things are done in a particular place. Universities end up getting managed like a corporation, with similar kinds of policies.

Things are a bit better at small colleges (Rose-Hulman, Olin, Harvey Mudd, Wesleyan, Pomona, Colgate, etc.), which typically have much lighter-weight administration and a more pro-student, pro-experimentation attitude, as well as more success in en-culturating their administrators so they "get" the local culture and work with it. But they don't scale very well (I say this despite having gone to one and being a big fan of the undergraduate-college model).


How do you prevent the schools from just lowering graduation requirements in order to artificially boost the percent of graduates and get a better payout?


Since they are either fully government funded or jointly funded with municipalities, there are no incentives to chase short-term profits by running diploma mills.

The Ministry of Education controls the money and conducts yearly performance-target negotiations bilaterally with each higher education institution. You actually need a permit from the ministry to run any kind of school. Even our few "private" primary and secondary schools are publicly funded and regulated accordingly.

The independent expert body FINHEEC audits universities' quality management schemes regularly. Some European countries use accreditation-based evaluation (for single degree programs) instead of system-wide audits. At least one Finnish university has also acquired an ISO 9001 cert, but it was seen as more labor-intensive and not providing the same benefits (benchmarking, benchlearning) as the required peer-based audits.


Outline of FINHEEC audit process and outcomes: http://www.qaa.ac.uk/Partners/education/Documents/FINHEEC%20...


Well, that is a problem. But universities also don't want to be known for poor quality. And then there is pretty strong government oversight. In Sweden, the National Agency for Higher Education does regular audits and has the right to remove a school's privilege to award degrees.


And it is a right they also exercise, but rarely against an entire university. The schools usually just lose the privilege for one subject.


I would imagine that they self-audit a lot better than the US-based degree mills, but I'd be fascinated by the specifics as well.


> Maybe it's because all of our schools are public?

There are Asian countries where this model has failed. Perhaps because of population pressure or other social factors. But I truly like the Nordic way of life.


Could be low corruption. The nordic countries usually score low on corruption comparisons. High corruption can make almost any system break.


I'm wondering these days if you can be any sort of hacker at all without finding some kind of vulnerability in your college's network.

For me, it was a way to steal the AFS space of the previous user (basically, they didn't expire the token... oops). I actually found the initial vulnerability by accident (something crashed due to network problems, reconnected and went, "WTF, those aren't my files!"), but I did find a good way to reproduce it on demand (yank Ethernet cord at proper time). Thankfully, I had read enough stories like this way back then and submitted the bug anonymously. This was ~2000 or around then, mind you.

I also tried to get university management to switch people over to using SSH way back in 1998, but it was something like 4-5 years before they eventually did so. I'm guessing they had no idea what I was talking about or why it even mattered back then, even though anyone could see everyone's passwords going over the wire with all the people who had to telnet for various reasons. Maybe they assumed that log file they were writing our activity to would catch anybody doing anything weird? It was cleverly named "resugol"--read that backwards if you're confused.


Did you pass that course?


I got a B. The homework was to find and write an exploit for 10 security holes in deployed software, but I only found 2. (3 including the one above, which I must have found the week or so after exams. The holes I found were in nasm and in some amateur open-source smtpd.)

FWIW, the exams are quite thought-provoking nearly 10 years later; here's a link to them: http://cr.yp.to/2004-494.html


Didn't he famously fail the whole class one of the times he gave it?


I don't think so, as he's only taught the class once and I didn't fail it:

http://cr.yp.to/courses.html


There was a Slashdot story about it.

I remember reading the course syllabus online and being jealous despite already having worked in professional vulnerability research for a few years. You're lucky to have been at the class! Was he a good lecturer?


What did you think of the course textbook ("Exploiting Software", Hoglund & McGraw)? Is there a more modern alternative that you (or anyone) can recommend?


_The Art Of Software Security Assessment_ is the current canonical text.

Here's a reading list; I'd add Zalewski's _The Tangled Web_ to it, but change little else: http://amzn.to/cthr46


The textbook covers subject matter that won't become outdated: reverse engineering, how to craft malicious input, etc.


Sounds like a very cool course.


Reading stories/incidents like these makes me believe that education as a whole is slated for reinvention. As they say: competition doesn't kill your business; attitude kills it.


Agreed. The vendor involved with the security problem was quite pleasant to deal with, of course. It was just the bureaucrats that were worried/afraid/stupid/whatever.


Policy IS the policy.


This sort of thing scares me. One time I found a security vulnerability in a popular forum I frequented. I emailed the site owner, and he thanked me and fixed it. Later someone else discovered another weakness and used it to post spam; the site owner emailed me asking about it. My initial thought was that he suspected I was the one doing it, but it turned out he was just trying to see if I could help him.

That scared the crap out of me though and I realized this was a VERY bad idea. Something as harmless as trying to help someone make their website more secure can get you more jail time than robbing a bank.

I also, completely accidentally, logged into another student's account at my university (a big university too). The school gives you an ID number. Your initial password is the same as this ID, and you're supposed to change it later. I didn't remember my ID correctly, swapped two numbers in it, and ended up in someone else's account. Home address, phone number -- all sorts of information staring me in the face. Will I report this issue? Heck no!

It's weird how many of these I discover by accident. My school also had a hackathon hosted by eBay and PayPal. In fact, one of the programmers from PayPal was there. During the hackathon, I stumbled upon a way to get account information without authentication (security tokens were being seriously misused). The PayPal guy was shocked and asked me to send him all the information on what I had found. Never did get any sort of reward out of that... (and I lost the hackathon too).


> more jail time than robbing a bank

This meme of "more jail time than robbing a bank" needs to end.

The federal penalty for possessing a firearm while robbing a bank is a mandatory minimum of 5 years and a maximum of life in prison. The mandatory minimum means that a judge could not sentence an armed bank robber for less than 5 years for each bank robbed while holding a gun (you don't even need to show it; just having it is enough). To make it worse, each 5-year gun sentence must run _consecutively_ with each other sentence (i.e., be added on after you serve the other sentences). [1] If you brandish the gun, it becomes a mandatory minimum of 7 years, and if you fire it you get a mandatory minimum of 10 years [1].

Contrast that to all of the hacking charges we've discussed recently where the mandatory minimum is zero (a judge could sentence a convicted defendant to no penalty, or to probation).

To go further, the US Sentencing Guidelines [2], which are all-but-mandatory for federal judges (there's a constitutional out, but in effect most defendants are sentenced according to the Guidelines) gives "wire fraud" a base offense level of 7 (of 42+), which gives a sentencing range of either 0-6 months or 4-10 months, depending on how much economic harm is caused. Compare that to robbing a bank, which is a base offense level of 22, brandishing a firearm adds +5 for an offense level of 27, and if you actually make off with any cash add another +2 for an offense level of 29 (of 42+). The sentencing guidelines call for a sentence of 87-108 months (7-9 years) for a first-time bank robber, per bank, assuming that nobody gets hurt---plus the mandatory additional 5+ years for having a gun.
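
(Restating that arithmetic in one place, using only the figures above, for a first-time offender and a single bank:)

  # Back-of-the-envelope restatement of the Guidelines figures cited above.
  robbery_level = 22 + 5 + 2     # base + brandishing a firearm + taking the money
  assert robbery_level == 29     # maps to 87-108 months at Criminal History Category I
  wire_fraud_level = 7           # maps to roughly 0-6 months, with no mandatory minimum
  # The robber also gets a mandatory 60+ months under 18 USC 924(c),
  # served consecutively to everything else.
  print(robbery_level, wire_fraud_level)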

Realistically, bank robbers face a lot more time than even malicious computer criminals.

[1] See section (c) of 18 USC 924 http://www.law.cornell.edu/uscode/text/18/924

[2] http://www.ussc.gov/guidelines/index.cfm


> "The federal penalty for possessing a firearm while robbing a bank is a mandatory minimum of 5 years and a maximum of life in prison. The mandatory minimum means that a judge could not sentence an armed bank robber for less than 5 years for each bank robbed while holding a gun (you don't even need to show it; just having it is enough).

What's more, you don't even have to have a gun for it to be classed as "armed robbery". In the UK, just the threat of having a firearm is enough (you could be brandishing a water pistol or even just making a gun gesture behind your unzipped coat).


This seems more like walking up to a teller and asking nicely in a clever way if you could have all the money. Is it even a crime if the teller responds positively to your request?


My suspicion is that yes, that would be a robbery if you ask in such a way that the teller actually gives you money.

You could ask in such a way that it comes across as a joke ("Anything more I can do for you today sir?" "A million bucks and a winning lottery ticket would be nice"), but if it comes across as a joke then the teller isn't going to give you any money.. because they think it is a joke.


I think that is a reasonable interpretation, but it sets a scary precedent. If you are selling something and I, the buyer, say "I'd really like to get this for free" and you respond, "Okay, it's yours!", can you come back and call me a thief later?

> if it comes across as a joke then the teller isn't going to give you any money.. because they think it is a joke.

I'd also add that the vast majority of malformed requests are denied. Only computers that have a sense of humour, so to speak, comply with the abnormal requests. Computer security is much closer to this scenario than carrying a gun, I feel.


Yeah, in the real world there are a lot more factors to consider than just the wording. How threatening the victim or potential victim feels the other party is being is hugely important.

For example, there is a world of difference between a panhandler asking you "Hey, can I have a couple dollars?" in a populated touristy area during the day, and the same panhandler following you for several blocks at night before asking you that in an alley. One is just panhandling, but the other is effectively a mugging.

Computers don't really have those sorts of cues, so it becomes difficult to make reasonable comparisons between the two.


There's no nice way of saying you have a gun.

Also handling stolen goods is a crime. So even if you didn't personally rob the bank, if you know the money is dodgy then you shouldn't accept it.


Yup, same in the US.


Good bank robbers use digging equipment, not guns...

http://www.spiegel.de/international/zeitgeist/berlin-bank-ro...


This is the epitome of a bike shed discussion. You surpassed the parent post length to demolish a throwaway hyperbole. Stay on topic.


People have successfully robbed banks with just notes, the penalty for which can be less than 5 years, depending on the note.

Not that robbing a bank is all that profitable vs. the risk and penalties.


Well, bluntly, I was just exaggerating.


Tell that to Aaron Swartz, oh wait...


This phenomenon isn't unique to computer crime. The other day my iPhone was stolen from my car in my apartment's parking garage. I forgot to lock the door. I noticed that the guy who parks next to me (we have assigned spaces) also left his door unlocked. I was going to leave a note suggesting he remember to lock his doors because something was stolen from my car, but I thought better of it. If something were stolen from his car, would you want your note to be the only piece of evidence of what happened?



Had a very similar experience at my university. The library worked in the same manner as you described -- the login was a function of your student ID number, and the password was initially the same.

I wondered about the security of that solution, so I checked some random ID numbers to shockingly find out that about 80% of people didn't change their passwords! (I don't remember if you were actually prompted to change it upon first login, or you just had to do it by yourself). I could log in multiple times from the same IP to different accounts.
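
(On the admin side, flagging those accounts would have been trivial. A minimal sketch, assuming bcrypt-hashed passwords; the user list here is hypothetical:)

  # Flag accounts whose password is still the default (the student ID).
  # "users" is a made-up structure standing in for the real account store.
  import bcrypt

  users = [{"student_id": "20130042",
            "pw_hash": bcrypt.hashpw(b"20130042", bcrypt.gensalt())}]

  for u in users:
      if bcrypt.checkpw(u["student_id"].encode(), u["pw_hash"]):
          print(u["student_id"], "still has the default password; force a reset")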

I hesitated whether to notify someone about it, or to check out a copy of "Mathematical Analysis 1" or something like that for some 100 people in the middle of the holidays, within half an hour. That would have been hilarious, but they would inevitably have thrown me out of the university if they found out, so I didn't risk it, nor did I notify anyone, due to the horror stories here and there.


I dealt with a situation at a college internship. The company was designing a marketing campaign for Nokia, but we were having major problems with the firewall software, which made for a very flaky Internet connection.

Long story short, my manager disabled the firewall and we were hacked that night. I was unceremoniously let go the following day. I discovered soon after that the company blamed me for the attack, saying I turned the firewall off and hacked the servers myself.

The school immediately started expulsion proceedings without even contacting me. Fortunately, my advisor personally addressed the issue and had everything dropped. The drama only lasted a few days, but the school's brain-dead response to the issue gave me zero confidence in their ability to review anything objectively. I was so disgusted I refused to walk in the graduation ceremony, much to my parents' disappointment.


Unauthorized security testing == Malicious attack

The actions of Mr. Al-Khabaz were unlawful and unethical. If he had merely accidentally found the flaw and reported it to the responsible person, things would have been fine. But security testing without the permission of the system owner is the same as an unauthorized access attempt!

I have worked as a security professional for 7 years, and I recently gave a guest lecture at a college discussing an example like this. Most students were not aware of where the problem lies. Maybe it would help to imagine how a story like this would look in the physical world: let's suppose you come back home and find someone picking your door lock with a lock-picking tool. You ask him "what are you doing?" and he says "I'm just checking if your lock is safe. I do it for your security." Would you believe him? Or would you call the police immediately, without asking him anything? Let's add to this that security testing tools can sometimes degrade the tested system's performance or even crash it. In that case, it's not just an unauthorized access attempt, but a successful denial-of-service attack!

Never, ever, do security testing of a system without the written permission of the system owner. If you get permission, you will probably be asked to sign an NDA in return. You will also need to provide some information, like the source IP address you're using and emergency contacts who can be used to stop the testing in case of problems (like crashes, etc.). This is the only lawful and ethical way to do these kinds of procedures on someone else's system.

I'm not discussing whether the penalty is OK in this case. It really doesn't matter if most people here cannot tell what he did wrong in the first place.


Malicious definition: "motivated by wrongful, vicious, or mischievous purposes", so it doesn't look like what he did was malicious. Also, unlawful? Please cite the Canadian law that he broke. Even in the US (IANAL), the law mentions only a vague "unauthorized access". Has anyone ever been charged or convicted for running a vulnerability scanner like Nessus?

Not that I disagree with you: always ask for permission in writing from an authorized person before performing any kind of scan or security testing.


I explained my point below in more detail regarding the equation and why I think it should be remembered.

When someone is scanning your system and you haven't authorized it, you will definitely treat it as malicious. In that moment, you don't care about the attacker's inner motives, because your system is under attack and you had better act accordingly.

I know a story about a guy who lost his job because of unauthorized Nessus scanning in his company. Every story with a convicted hacker involves some kind of scanning tool (at least nmap) used in the scanning phase; you can bet on it. Every scanning tool is an attack tool. In fact, scanners are the most useful tools for any kind of attack, because they minimize the amount of manual effort needed.

I don't know much about Canadian law, but most current laws forbid unauthorized access and _attempts_ at it.


I think what the GP meant was something along the lines of "unauthorized security testing is indistinguishable from a malicious attack", in the sense that you cannot but expect that the administrators of the system in question will react in alignment with their own goals. And you really have no control over whether they perceive you as an ally or a threat.

Orthogonal to this fact is the question of what happens when an authority is brought in to resolve the conflict. And something young hackers need to learn as early as possible is that you are not entitled to due process in every possible context. It would be unlawful if you were not given the chance of a fair trial in the context of a criminal or civil lawsuit, but this does not translate well to private institutions.

In the particular case of a student's unauthorized access within a university, this problem is compounded by the fact that the University and its representatives play the roles of prosecution, judge, jury and (sometimes) defense. You also have to consider that the people doing this are not legal professionals but are pulled out of their real jobs to sort out some random mess, so the only constraint is their common sense. I've even heard a first-hand report of a case at my university where the faculty member supposedly playing "defense" was the most gung-ho about giving the boot to the guy in question (who ended up getting a one-term suspension, but got to keep his scholarship, so it could have gone much worse).

This is probably not "fair", but it is the way it is and nobody seems interested enough to make it change. Education has a number of stakeholders with sometimes conflicting preferences and goals, so this is not a trivial problem.

But the point is that once your actions put you in harm's way, the abstract concepts of "fairness" and "proportionality of the punishment" are academic at best. My opinion is that legality is the bare minimum standard society imposes to keep barbarism at bay, but it is pretty rough itself. So it is in your best interest to conduct yourself in such a way that appeals to "the rules" happen as little as possible.


>"Unauthorized security testing is indistiguishable from Malicious attack"

Of course it's distinguishable. Testing comes before attacking, to provide information. The two are otherwise completely unrelated. It'd be dead easy to distinguish between someone poking your fence and someone stealing your jewellery, for example.


If your test to see if you can pick a lock is actually trying to pick the lock, then the test can of course be indistinguishable from attempted burglary. If you were caught in the act, any defense will be suspicious. However, if you confessed of your own free will, there is usually no reason to suspect criminal intent.


If you're caught in the act, sure. But they called him about it well after the fact. That's solid evidence that he left after entering and did not use the entry to commit a crime.


You are willingly missing the point here. It is human nature to assume malicious intention, even if it is wrong. And if there's no strong motive to provide due process and investigate, malicious intention will be assumed.

If a random male servant is found to have gained unauthorized access to the princess' chamber, torture comes first and beheading comes last. In-between questioning regarding his intentions and the degree of fulfillment is optional.


You don't do that if you have video evidence of him entering, standing there for 15 seconds, and leaving.

There is a huge difference between catching someone in the act of breaking in, where it's reasonable to assume malicious intention, and noticing that someone entered and left, where you can see that they didn't do anything malicious.


> 'Unauthorized security testing == Malicious attack'

I don't agree with that. I do think that unauthorized testing is unethical and you should get permission first, but treating it the same as a successful attack and punishing it the same is wrong. The main difference is intention. And Mr. Al-Khabaz notified the relevant authorities and did get thanks at first. If we compare this case to your example about locks, I'd say that Mr. Al-Khabaz walked around your house, saw the broken lock on your back door, then came to your front door, knocked and told you about it. Maybe you would wonder why he was walking around your house in the first place and accuse him of being weird, but can you accuse him of breaking in and stealing?

P.S. Since the author of the article is known for partnering with students defending organizations, the whole story could be one-sided, and it would be good to judge after hearing the other side. E.g., it might not have been the first incident, or there may be traces of something more than just a security inspection.


You missed my point. Like I said, I'm not commenting on the penalty. In my opinion, it's too harsh. But this is only my opinion after hearing (just like you said) just one side of the story.

The main problem with unauthorized testing (putting aside technical problems) is that the person who performs it is in a _very_ difficult position explaining her intentions. She has already done what is considered the _second_ stage of a hacker attack. Until she can prove her good intentions, this is rightfully treated as a malicious attack.

This is what my equation means. I think everybody on this forum should be aware of this. Don't get yourself in trouble for not knowing this.


> She has already done what is considered the _second_ stage of a hacker attack

Considered by whom? There are companies that pay you money if you can find a bug in their software. And that's an open offer; they don't say 'wait, we'll get ready at 8 p.m. Friday and then you can check'. What do you think Google would do if this student used a scanner (or something else) on Gmail, found a bug, and then told Google about it?

I still think that intention is the key difference here. And as you said, 'the person who performs it is in a _very_ difficult position explaining her intentions'. That's why you shouldn't do any unauthorized checks: even if you wanted to tell the relevant authorities about your findings, you could be caught before that, and then you're screwed. But Mr. Al-Khabaz informed the university/company and was the initiator of that talk, so it kinda clears him. He was able to reasonably explain his intentions, and his punishment could have been just a warning (of course, assuming there are no significant details we don't know about). Also, he didn't get any credit for the help he provided by finding the bug.


Scanning is the second phase of the standard hacker attack procedure. Phases of hacking:

Phase 1—Reconnaissance
Phase 2—Scanning
Phase 3—Gaining Access
Phase 4—Maintaining Access
Phase 5—Covering Tracks

Regarding this guy's intentions, you're probably right. The main reason I'm commenting here is so that guys with good intentions don't get themselves in trouble for not knowing what they're doing.

Finding vulnerabilities in software on your own machine and hacking other people's systems are entirely different things. By testing software you're not violating anything (except maybe the EULA for some licences). By hacking other people's systems, you're committing a crime.

> What do you think Google would do if this student used a scanner (or something else) on Gmail, found a bug, and then told Google about it?

At first, they would treat it like an attack, like almost any other company would. I have no idea what would happen later.


But you wouldn't call reconnaissance hacking, would you? That's just vaguely looking at the site and information about the company. Step 2, pointed at something like a webserver, does not connect to any systems the person is not supposed to have access to. Only step 3 crosses the line.


Good point; I wouldn't call reconnaissance hacking, for two reasons: 1) it's a passive method, and 2) it's not done on the attacked system.

Scanning is an active method and it's done on the attacked system. Web scanning is not the same as web crawling (downloading the pages of the site). It includes all kinds of invasive tests, like SQL injection, XSS, command injection and other attack attempts. It can cause many kinds of problems, as listed elsewhere in this thread.

From security perspective, scanning is an attack. Everyone who uses these tools should be aware of this.


Companies paying bounties for bugs are explicitly giving you the right to pen test their applications. This changes nothing in terms of unauthorized scanning = malicious attack.


You are probably correct that what he did was unlawful (Canadian law is usually fairly close to US law), but I disagree that it was unethical.

In a general sense, it's not difficult to find instances of behaviour that, while lawful, are far from ethical, so those two things don't necessarily travel together. Some examples: http://en.wikipedia.org/wiki/Sexual_Sterilization_Act_of_Alb... http://en.wikipedia.org/wiki/Canadian_Indian_residential_sch... Obviously this could be a long list...

In this specific instance it seems that his information was exposed by this flaw along with everyone else's. Wanting to verify the safety of your own information feels like a pretty reasonable and ethical thing.

I think I would rephrase your example a little: "Let's suppose you let someone store their stuff at your house. You come back home and find them picking your door lock with a lock-picking tool. You ask him "what are you doing?" and he says "I'm just checking if your lock is safe. I do it for your security." Would you believe him?"


An analogy even more accurate to this case would be: "Let's suppose you let someone store their stuff at your house, and they have previously pointed out a problem with the lock. You come back home and find them picking your door lock with a lock-picking tool. You ask him "what are you doing?" and he says "I'm just checking that the lock I said you should fix is safe. I do it for our security."


There are many standards of ethics. I am talking about professional ethics in information security. Example of this: https://www.isc2.org/ethics/default.aspx

If you are in the business of finding vulnerabilities in IT systems, you should be aware of it. If for nothing else, to save yourself from situations like this.

This guy is not a security professional (yet), but running vulnerability scanners on other people's systems definitely puts him in that context.


http://www.acunetix.com/blog/web-security-zone/should-you-te...

Here's what can happen when production Web applications are tested, including:

Email floods

Junk data inserted into databases

News feeds filling with random input

Log files filling up

Accounts getting locked out

Internet bandwidth consumption

Scans that take longer to complete

High server and database utilization

Incident response teams and managed security providers having to deal with alerts

Final cleanup needed after the fact


Still, all those things are caused by bugs in _your_ software. And all of that can be caused by regular users just hitting one of the bugs.


No they are not bugs, in any way, shape or form. I think you are missing the technology and ethos of website design here.

Web scanners do massive offensive attacks. They basically DOS attack your site in many ways, trying millions of attack vectors.

Mitigating against vandalism is very hard, and the more you do, the more it hurts users. Generally you leave things as open as possible and it is OK, since it's not a security issue per se and most sites can live their lives never having been attacked this way.

There's no money in vandalism and unless you piss off skilled or determined people it won't be abused.

Someone could write a script to cause thousands of dollars of damage to Wikipedia without much trouble. But Wikipedia chooses to leave itself open and take the risk. They don't have a bug. They are trying to do the right thing by users.


> No they are not bugs, in any way, shape or form.

Maybe not all, but some of them are.

I think you mentioned a different issue here; puerto called the unauthorized check 'unethical', and you are talking about performance. If Mr. Al-Khabaz had used some noninvasive scanner that didn't bring any serious technical overhead, would that be OK by you?

> Someone could write a script to cause thousands of dollars of damage to Wikipedia without much trouble. But Wikipedia chooses to leave itself open and take the risk. They don't have a bug.

I don't really understand what you mean when you say 'open' - open to what? But I think Wikipedia has some protection mechanisms, because at their scale, if someone could easily bring them down, someone would.


As per my link to a direct article by the makers of the scanner he used, it is invasive. What more do you want?

Yes, passive scanning is fine with me, it's probably legal in most countries, but this is not certain (See Google and wifi). But I don't see the relevance to the conversation.

Passive automated scanning is fairly useless so it's not really used.

The fact is he broke the law at a criminal level and caused damage; if you can't see this, you really have no idea of the reality of the technology he was using.

But what should happen to him for it is a discussion for a different thread.


I agree. But _any_ kind of hacking exploits some kind of vulnerability in the system. The presence of the bug doesn't give you the right to exploit it.


Yeah, those things also.. ;)


It's his own data in the system, which makes this completely different. In your lock picking example, it would be a landlord finding one of their tenants picking their flat's locks.


"It's his own data in the system, which makes this completely different. In your lock picking example, it would be a landlord finding one of their tenants picking their flat's locks"

More accurate would be catching your tenant picking every single apartment's lock to prove that their personal lock is vulnerable.


Assuming the vulnerability scanner tries some basic login attacks (for example, trying default username/passwords), then it would be analogous to a landlord finding one of their tenants trying to pick their neighbours' locks, and that of the building management office.


So you think you can do the testing of any system that contains your data without prior permission?


No it's more analogous to him trying to break into a bank vault because it has his money.


If by breaking in you mean walking in through open, unsecured doors...


You are overlooking the fact that Al-Khabaz informed the system owner 2 days prior of the problem. Thus, you cannot claim the actions of Mr. Al-Khabaz were definitely unlawful and unethical; that remains to be seen. This is not a black and white issue.


"You are overlooking the fact that Al-Khabaz informed the system owner 2 days prior of the problem"

Warning the system owner doesn't give you the ability to run pen tests if they do not wish you to do so.


"True, but it makes the case quite different in legal and moral scope from one in which the system owner is not warned"

I would believe that it would really only make a difference if the systems administrator replied to your warning with acceptance and an invitation to do so.

Morals being subjective, how do you feel it would change the legal conditions?


"A warning removes malicious intent. Lack of warning leaves malicious intent in place."

The trespassing, i.e. using a system in nonstandard ways, could still be considered "malicious", even if the user's intent was not. (I'm not making judgments on the guy so much as imagining that prior warning is not sufficient.)


I don't see how a reasonable person would conclude Al-Khabaz's actions were malicious. People with malicious intent do not draw attention to themselves prior to the event, nor do they advertise the exact attack that they will use.


You're still stumbling through systems you are not explicitly invited into. I understand why you might feel that good intentions validate the act, but assuming that all administrators are so gracious would be dangerous :P


No, not "validate", that would be equally black-and-white thinking. But a decision of the legality and morality of the action should take into account the whole circumstances, not just the bare fact of the unauthorized access.

This seems similar in many respects to the Aaron Swartz case. My initial response rejects the idea that all actions regardless of motive should be taken as equally unlawful and unethical.


A warning removes malicious intent. Lack of warning leaves malicious intent in place.


True, but it makes the case quite different in legal and moral scope from one in which the system owner is not warned.


I agree with this if you get rid of any references to morality. Can you explain how a vulnerability scan would be considered morally equivalent to a full scale attack?


Sounds like he was using an automated scanner as well. That's a stupid thing to do and he should be in trouble.

I'm not sure he should be expelled, but definitely reprimanded.


I've said this before -- don't bother being a "white hat".

The industry and the legal system don't have a pigeonhole for that. You'll be labeled a "hacker" (and not in a positive sense of it). Either disclose the vulnerability immediately to get recognition, hoping it is public enough they'll be ashamed of going after you, or sell and profit from it. You are already treated as a criminal by these large institutions, so if you go in that direction might as well make some money.


During undergrad I discovered the university's blackboard-like site sent plaintext passwords over http, and the majority of its use was over wireless. I went to the IT office responsible for the site, told them about it, and refused to give my name when they asked. After reading some of the horror stories on this page, I feel really lucky that the IT department didn't go further to figure out who I was and get me in trouble. End result was that quickly afterward their site forced https on you...
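
(The fix itself is only a few lines wherever requests first land. A minimal sketch written as a Flask hook, purely illustrative since I have no idea what their actual stack was:)

  # Redirect any plain-HTTP request to HTTPS before serving a login form.
  # Flask is just for illustration; the site's real stack is unknown.
  from flask import Flask, redirect, request

  app = Flask(__name__)

  @app.before_request
  def force_https():
      if not request.is_secure:
          return redirect(request.url.replace("http://", "https://", 1), code=301)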


Man. I found an XSS bug in the University of Washington's web portal several years ago. It would allow a hacker to impersonate any user if they clicked on a crafted hyperlink.

After testing this on my own account, I reported it right away to the university. They thanked me and fixed the problem within days.
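
(The standard fix for that class of bug is to encode anything reflected back from the request. A minimal sketch with a hypothetical handler, not the portal's actual code:)

  # HTML-escape anything echoed back from the URL so a crafted link renders
  # as inert text instead of running script in the victim's session.
  import html

  def results_page(query):           # hypothetical request handler
      return "<h1>Results for %s</h1>" % html.escape(query, quote=True)

  print(results_page("<script>alert(document.cookie)</script>"))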

But after reading these horror stories, I feel extremely lucky that they didn't do something much stupider. My entire academic career could have been destroyed, as well as my professional one if they'd decided to press frivolous charges.


The fact that they went https tells us you would probably be okay.

People who go after security bug reporters tend to never fix the bugs in question. They're, like, too righteous for it.


Probably, but not necessarily. They could easily harass you, saying that the costs of such an upgrade (probably actually measurable only in the effort of some salaried employee) are damages that you caused.

edit: see the $800k 'damages' Gary McKinnon allegedly caused. It's not like he smashed their equipment with a sledgehammer or something.


It's certainly a grey area, and covering all your bases legally before embarking on a penetration test would be a good idea. Even with all the legal formalities, there needs to be a good level of trust between the client and the auditor for things to go smoothly.

Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected.

If you find a security flaw in a system and report it, receiving positive feedback doesn't automatically imply that you have permission to conduct further tests. A web application vulnerability scanner can cause damage to production systems.

Almost anyone can just download a scanner and run a wild test using default settings. But it's illegal to do so without prior authorization.

While his intentions were good, I think it was a bit naive of him to take upon himself the responsibility to make sure the flaws were fixed and conduct a test. Even when you have permission to conduct a test, you stick to the scope and limits of the agreement. You can't just keep leapfrogging networks as you find holes.

Manually finding holes/bugs accidentally and reporting them is different from running a vulnerability scanner.

I don't think he should have been expelled without being given a chance to explain his story, and the way they did it was not ethical. The management overreacted, especially considering there were no damages mentioned in this case.

http://testlab.sit.fraunhofer.de/downloads/Publications/tuer...

http://www.coresecurity.com/content/under-attack

https://en.wikipedia.org/wiki/Randal_L._Schwartz#Intel_case


  > While his intentions were good, I think it was a bit 
  > naive of him to take upon himself the responsibility to 
  > make sure the flaws were fixed and conduct a test.
Given that his own personal information could have been exposed by this exploit, it's just as likely that he was acting out of self-preservation rather than merely due to feelings of personal responsibility. The only naive bit here is that he obliterated his plausible deniability via 1) not allowing more time between submitting the report and attempting the scan, and 2) not masking his IP behind seven proxies.


Yes, it was naïve, and maybe unwise, but the curiosity would be hard to resist. I might have done the same thing in the same situation, at one time. (Now I'm old and soulless.)


Agreed. While he may say he was trying to verify the flaw was fixed, that just doesn't square with running a general-purpose vulnerability scanner against their network.

While I doubt his intentions were malicious, it certainly seems like he got curious / excited from his first find and went looking for more.

With that being said, I definitely feel for the guy. I can certainly understand the intrigue and curiosity that would lead him to continue his exploration. It sucks that they decided to bring the hammer down so hard.


It sounds like he may have been trying to find more flaws.


I'd revise that to: don't be a hobbyist "white hat". If you want to do white-hat hacking, either get yourself hired by a serious security outfit or set yourself up as a company doing security consulting. On the whole, people are far less likely to go after a genuine security company as opposed to 'just some kid in a basement'.


True, but then sites who are cavalier about security are unlikely to be the ones who think of hiring professional pen-testers.


> You are already treated as a criminal by these large institutions, so if you go in that direction might as well make some money.

In the second scenario, you probably are hurting innocent people.

So if you have a moral compass, you should maybe stick to being an anonymous white hat.


The one does not imply the other. Becoming a criminal is a bad idea (and rtdsc probably knows this and was engaging in hyperbole out of justified frustration), but becoming a martyr is also a bad idea. If you find a security flaw in a system you don't own, the best course of action is to ignore it and get on with your life. This is something every bright young hacker needs to be made aware of.


Being anonymous isn't always easy, unfortunately.


Maybe on your own, but if you manage to brand yourself as a consultant or get hired explicitly as a white hat, you can do it. Good example would be penetration testers.


Agreed. When SQL injections in ASP were all the rage some ten years ago, I contacted a couple dozen companies to inform them that their customer admin pages exposed full credit card numbers, and I asked for nothing in return. (At the time, someone who was offered money to help fix a security breach was arrested for blackmail -- the "employee" offering money for his services was actually the police talking to him -- so asking for nothing saved my ass too.) I got a ton of threats. Only one company actually gave me a number to call and thanked me, but when I asked for a postcard of their city the guy got really pissed. Good times.
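
For anyone who wasn't around for that era: the bug class was queries built by string concatenation. A minimal sketch of the flaw and the standard fix (parameterized queries) -- Python/sqlite3 here purely for illustration, since the vulnerable sites were classic ASP:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, card TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     [(1, "4111-1111-1111-1111"), (2, "5500-0000-0000-0004")])

    user_input = "1 OR 1=1"   # attacker-controlled value from a form field

    # Vulnerable: the input is pasted into the SQL text, so "OR 1=1" turns
    # a one-row lookup into a dump of every customer's card number.
    rows = conn.execute(
        f"SELECT card FROM customers WHERE id = {user_input}").fetchall()
    print(len(rows))          # 2 -- everything leaks

    # Fixed: a bound parameter is treated as data, never as SQL.
    rows = conn.execute(
        "SELECT card FROM customers WHERE id = ?", (user_input,)).fetchall()
    print(len(rows))          # 0 -- no id literally equals "1 OR 1=1"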


You can also pastebin it. That's what you should do.


Perhaps from a tor connection?


How do new pastebins get discovered? I've never used the service - I was assuming someone would post the link to the pastebin on Reddit?


Yeah, Tor->Reddit should work. Alternatively you could fire off some emails to a couple high-ish profile twitter accounts of people/groups that would be interested in taking credit for it.


Yes, someone should. You can imagine some fun ways of doing so.


Ahmed, if you're reading this, sorry about your college acting like idiots. If finishing college is important to you, I'm sorry they've made it so difficult.

That said, please don't think this is going to end your career. There are a lot of companies and startups that would love to have you for your kind of initiative. Not having a degree that you don't seem to need anyway will not be a sticking point with them. And the option of starting your own consultancy is a possibility - you already have some publicity that can help with initial gigs.

If you'd like to try your hand at a job, do check out ThoughtWorks (www.thoughtworks.com). We don't usually stand on ceremony or make a fuss about qualifications.


I have to second this. Start sending your resume out and include a link to the story.

We're a little far away (Australia), but otherwise you'd get in the door for an interview at the very least.


He's technically still in Québec's equivalent of a US high-school 12th grade. Since he's 20, he can wait a year and be accepted to a University.


No, CEGEP has either 2- or 3-year programs. Year 1 is equivalent to US high-school 12th grade. Year 2 of a 2-year program is equivalent to the 1st year of university for a B.A. or B.Sc. The 3-year programs tend to be terminal degrees of a more "technical" nature.


I'm aware. I thought he was in his first year of CEGEP.


Even aside from the fact that he was acting in good faith and did not cause any damage to persons or property (as acknowledged by the software vendor), the procedure used to expel him is woefully lacking. I sat on the highest student discipline tribunal at my (Canadian) university and an expulsion for non-academic reasons - which had to receive final approval from both the President and the Governing Council - would only be recommended in cases involving egregious and likely criminal misconduct and only after the courts had found merit to the allegation.

Furthermore, any student faced with potential expulsion would have been entitled to a series of quasi-judicial hearings and assistance in preparing their defence. To expel someone for non-academic reasons from a publicly-funded institution (which Dawson is) should not be taken lightly and surely never in a fashion where the accused is not permitted to present their case.


It was also a really crappy cover-up strategy on the school's part. By refusing Al-Khabaz due process and expelling him with zeroes for his last semester's grades, the school left him with nothing to lose by exposing both the security flaw and the injustice to the press. If they hadn't played all their cards at the same time (putting him on probation instead, say), he probably wouldn't have gone public.


In all honesty, it is all of these reasons that make me believe that we're not hearing the entire story.


The CS faculty at Dawson (less one) should be embarrassed.

This happened to me twice in college, minus the expulsion part. In the less interesting case the University sent around a form to be used in nominating student speakers for commencement. It included a drop down that was keyed off of student id. Student ids were regarded as private.

The school required everyone to either buy health insurance from them or provide proof of insurance, through a webapp. The login required only your student id, name, and birth date (thanks, Facebook), and if you revisited the app after using it, the form auto-populated with your health insurance information -- so the student ids leaked by the nomination form were effectively credentials. I brought it to the attention of the University and they took down the nomination app in a matter of minutes.

In the more exciting incident, someone at Sungard called my university and asked them to have the campus police arrest me. (Edit: Quite boring, really http://seclists.org/bugtraq/2008/Jan/409)


"The CS faculty at Dawson (less one) should be embarrassed."

Now they are.


What's upsetting is the 14/15 professors who voted him to be expelled. Do computer science professors not understand the concept of white-hat hacking? Shame on them.

What message does this send to other students at Dawson? Don't be curious; don't go out of your way to do a favour for the safety of your peers; keep your mouth shut and we'll hand you your degree.

Someone give him a scholarship to a legit university!


> Do computer science professors not understand the concept of white-hat hacking?

Unfortunately, if they were at all competent they wouldn't be teaching at a place like that. CS programs at minor universities are notoriously poor and staffed by whoever they could get, and it's not going to be anyone that can make decent pay working on current technology.


Dawson isn't a university. It's a CEGEP. In Quebec, high school only goes until grade 11, after which most students do two years at a CEGEP before going on to university. It replaces grade 12 and the first year of university.

http://en.wikipedia.org/wiki/CEGEP


Perhaps CS is an exception, but I was under the impression that jobs in academia (in general) were in woefully short supply.

While I'm sure they wouldn't get the cream of the crop, there's reportedly an excess of under-employed and under-paid PhDs and post-docs in a number of STEM fields (again, specifically in academia).


CEGEPs are kind of a combination of community college, last year of high school, and first year of university. They are teaching institutions, not research ones. US community colleges can demand Master's degrees, but not Ph.D.s, to teach. People with Ph.D.s who can't get real academic jobs mostly exit that market rather than going to a CC.

Anyone who actually teaches a CS course at a CC or a CEGEP as a full-time job is doing it for non-pecuniary reasons -- up to and including being incompetent but having attained a qualification sufficient to teach.


CEGEP teachers don't do any research and aren't really considered academics. The hiring requirement for Dawson is a Master's in CS + 2 years experience, and that requirement can be waived down to a college diploma (DEC, that's less than a bachelor's) if one has enough industry experience to justify it.


They were professionally embarrassed. Hence the aggressive stance towards him.


Back in 1999 when I was a freshman in university, my school had a server for students to host their websites on and use Pine for email. The server did not give shell access... but then there was a security hole in Pine that would allow you to run chsh. So I did that, and got shell access. I think the worst thing I did (other than running ls in a few directories) was use it to connect to IRC.

Since I wasn't really trying to hide anything, one of the IT guys must have seen me with shell access and reported me. My punishment was having the ethernet turned off in my dorm room (even though the incident occurred in a computer lab, and the dorm's ethernet wasn't even ready for use yet). I appealed the decision and met with the Dean, and she said I was considered a threat to the school, so I should be happy that my punishment wasn't worse.

Anyways, the rest of the year in the dorm was spent playing a cat and mouse game. I used my computer on my roommate's LAN port, so they ended up shutting off his ethernet as well.. I felt bad about that, especially since they refused to give him internet access for the rest of the year. So I ended up making a 50 foot ethernet cable and running it through the bathroom into another person's room (Two 2-person dorm rooms were connected by a common bathroom). That got shut off, so I bought a new LAN card (to get a new MAC address) and connected to another ethernet drop. I was able to get online for the rest of the year, but that sure left a sour taste in my mouth for my school.

Edit: I remember one close call... over a break (I was one of the few people in the dorm), water came out of the shower drain and flooded our rooms. I came back from spending the day out to see the Dean going into our room to inspect the damage, and I quickly had to hide my 50 foot cable that went through the bathroom.


Was MAC spoofing not doable in 1999?


And sadly it won't be in the future. New Intel wifi cards have it blocked [1]; their new drivers even go out of their way to intercept Windows's attempts to change it from the software side. It won't be long until other manufacturers follow suit.

[1] http://www.intel.com/support/wireless/wlan/sb/CS-031081.htm


Sounds like we need a tor-like protocol for Ethernet.


Dude. F. You. Intel.


Entirely believable. I don't have a timeline for you, but I do know that even only a few years ago it was not ubiquitously supported.


It was, and I did use that to verify the new ethernet drop worked, but I would have to spoof it 24/7 for ~6 months. One slip meant losing the internet. So I thought the $10 on a LAN card was a good investment.


Some guy told me that it depended on the card you used; some cards apparently had EEPROMs that you could reprogram without too much trouble.
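
On Linux, at least, most drivers will let you override the address in software without touching the EEPROM at all. A minimal sketch (assumes the iproute2 "ip" tool, an interface named eth0, and root; the change doesn't survive a reboot):

    import subprocess

    IFACE = "eth0"                    # assumption: your interface name
    NEW_MAC = "02:00:00:12:34:56"     # a locally administered address

    # Take the link down, set the new address, bring it back up.
    for cmd in (["ip", "link", "set", "dev", IFACE, "down"],
                ["ip", "link", "set", "dev", IFACE, "address", NEW_MAC],
                ["ip", "link", "set", "dev", IFACE, "up"]):
        subprocess.run(cmd, check=True)   # raises on failure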


Is MAC spoofing the same as changing your MAC address? Because I change it quite frequently, but I don't see it as "spoofing"


There really needs to be legal protection for acts of white-hat hacking like this. Both protection from prosecution, and protection from reprisal. This kind of stuff isn't going to stop happening unless the act of finding and reporting a security vulnerability becomes legally protected behaviour.


Perhaps the existing whistleblower protection could be used here?


The problem is the that would provide a legitimate cover story for black hats. "Oh I was just doing a white hat scan".


Here's the thing: black hats are always scanning you. Where I work, a fairly low-key place, some of our ~100 Internet-facing IP addresses are currently being scanned at 15 requests per second. This is not uncommon. We get people on our guest network scanning us from the "inside" as well (they think they're inside, at least -- they have a 10.x.x.x address, so they're inside, right?).

Point being, if you can't hold up to a white hat scan, you're likely already hacked. Security is how you enforce your policy. But it's only white hat until data is compromised, and that's where the prosecution comes in.
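
If you're curious what that background noise looks like, here's a crude sketch of spotting scanner-rate traffic in a webserver access log (assumes common log format with the client address first; the filename and threshold are made up):

    from collections import Counter

    hits = Counter()
    with open("access.log") as f:          # hypothetical log file
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue
            src = parts[0]                 # client address
            second = parts[3].lstrip("[")  # e.g. 21/Jan/2013:10:15:32
            hits[(src, second)] += 1

    # Sustained double-digit requests per second from one address is
    # essentially never a human.
    for (src, ts), n in hits.most_common(20):
        if n >= 10:
            print(f"{src}: {n} requests during {ts}")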


That's not a justification for punishing white hats.

In the meantime, until we can make this understood, we need to make the workaround understood: if you find a security flaw in a system you don't own, and you haven't been formally hired for the specific purpose of finding that flaw, ignore it and get on with your life; it's not your problem. Going out of your way to help people in normal circumstances is noble. Going out of your way to help people who will reward you with a knife in the back is a mistake. Don't make that mistake.


The title is misleading. He wasn't actually expelled for finding the flaw; he was expelled because, after reporting the flaw, he ran an exploit program on the school's server without permission, allegedly to see if it had been fixed. Had he only reported it, he would not have been subject to any disciplinary action.


Well the article is basically written by the student union, so they try to get people on their side I imagine.

"Ethan Cox is a 28-year-old political organizer and writer from Montreal. He cut his political teeth accrediting the Dawson Student Union against ferocious opposition from the college administration and has worked as a union organizer for the Public Service Alliance of Canada."


Stupid, but hardly deserving of expulsion. Especially given prior evidence of his character in reporting the flaw.


He ran an exploit FINDER. He did not put exploit programs on the server.


You think that's not against the law?

ie: http://security.stackexchange.com/questions/14978/is-scannin...


We're talking about morals and ethics, not what the law says.


So the fact that the submission title is misleading makes the university's heavy-handedness easier to swallow?


I didn't say anything about whether the decision was justified. I only elaborated on the reason behind it.


It just means the whole article may contain more misleading statements and be one-sided. Journalists... you know.


The article could contain that regardless of whether the title is misleading.


“All software companies, even Google or Microsoft, have bugs in their software,” said Mr. Taza. “These two students discovered a very clever security flaw, which could be exploited. We acted immediately to fix the problem, and were able to do so before anyone could use it to access private information.”

Yes, even Google and Microsoft have bugs in their software. This isn't an excuse to bully people who tell you about the bugs in yours. The difference between you and Google is that Google pays people who find bugs in their software, especially serious security flaws, even if they aren't employed by Google, rather than threatening them with legal action.


Most schools have an acceptable use policy for their students which covers unauthorized vulnerability probing and port scanning.

I can understand Ahmed's youthful curiosity about whether the vulnerabilities he identified had been fixed... But he had handed off the info to the Dawson College IT team, and the ball was no longer in his court.

Running Acunetix against the college's/SkyTech's server(s) was a pretty dumb move. But hell, when you are in your early 20s, that's when you are supposed to make dumb mistakes.

I'm all for teaching moments, but this "One Strike And You Are Expelled" issue irks me.

Ultimately, this is about Edouard Taza of Skytech Communications being sleazy and manipulative by threatening a scared, inexperienced 20 y/o college student with expensive legal action and implying the possibility of jail time unless he signed a non-disclosure agreement.

The EFF should probably take a look at this.


Like most developers, I've stumbled into lots of security problems over the years. The first few times I attempted responsible disclosure, but that resulted in enough close calls that I simply don't report them anymore. I document them. Sometimes I might mention them to others who have an interest.

I would now never report a security flaw without an ironclad set of laws in place to protect the rights of white hats, whether we are licensed and approved security researchers or not.


I nearly got expelled from High School and pegged with a felony my Senior year for noticing a vulnerability.


So why exactly did Taza (the incompetent president of the company responsible for the security breach) mention "police" and "legal consequences" in his conversation if he wasn't making a threat?

If you are going to be a lying asshole and deny something, do yourself a favor and deny it outright. Don't try to imply that you were just having a friendly conversation about "legal consequences" right before you solicit someone to sign a non-disclosure agreement. No one in the world will believe you weren't trying to intimidate this poor kid into compliance.


Seeing as we don't have the actual logs of the conversation, who knows what was actually said? This is the biggest problem with these stories: we only get information through very partial observers.


That's why I only mentioned what the President admitted to saying.


> The agreement prevented Mr. Al-Kabaz from discussing...

No, it didn't, because he was blackmailed into the NDA. It's completely unenforceable. It was signed under duress and only benefited one party.


You misunderstand the purpose of an agreement like that.

It's not like it magically binds your tongue. It just makes it easier to sue you if you violate it. The fact that the student could win in a suit is irrelevant. He couldn't afford the time and money to fight.

Before he signed the NDA, they would have had a harder time suing him. Perhaps he could have spent merely $10k and gotten it quickly dismissed. After, the company could make it arbitrarily expensive for him to fight it. If he could have eventually proved coercion (which I'm honestly skeptical of) then he would have been off the hook -- after years of stress and massive lawyer bills.


You're absolutely correct, I hadn't considered that.


Who in their right mind would think it's a good idea to use a penetration-testing tool against their college?? The title is all wrong. He got expelled for running a penetration test, not for finding a flaw. He was congratulated for that! I heard someone else from the team even got some kind of prize for it.

Sensationalist journalism is what it is. After a little research, I discovered it was written by someone who used to be in Dawson's Student Union, so I guess he has a grudge against the administration.

"Ethan Cox is a 28-year-old political organizer and writer from Montreal. He cut his political teeth accrediting the Dawson Student Union against ferocious opposition from the college administration and has worked as a union organizer for the Public Service Alliance of Canada."


Maybe the right response would be to legally punish - by fine - both parties.

After all, private data was insufficiently safeguarded. Some poor girl could end up getting stalked if the right kind of sleaze came across this.


I think the college administrators are bullying this student because they are embarrassed.

The threats by the Skytech CEO Edouard Taza; the college not allowing the professors to hear the student before voting; his transcripts vandalized with zeroes so he cannot continue his studies elsewhere... What exactly is the relationship between Skytech and this college?

I've signed the petition to reinstate Hamed:

http://www.hamedhelped.com/petition/

Hamed, stick to your guns. You did the right thing.


I used to work at Skytech. We already had a case of a student discovering a flaw in our code while I was there and things went very smoothly. We contacted the student, he told us what the flaw was, we corrected it. Edouard made him sign a non-disclosure agreement and made him delete all the data he had gotten from our servers and that was the end of it. This student was a brilliant student with excellent grades just like Hamed.

Now why is this story different? I'm not too sure, since I left a couple of years ago, but my guess would be that the college administrators made this decision. Knowing Edouard Taza, I doubt he would have pushed for this student to be expelled, since the student clearly has a great future in software and could one day be employed at Skytech to fix even more security holes.

Edit: I hadn't finished reading the article; it seems the professors decided to kick the student out: "Following this meeting, the fifteen professors in the computer science department were asked to vote on whether to expel Mr. Al-Khabaz, and fourteen voted in favour." To me, what this says is that their computer science department is full of idiots. Any good CS professor would have understood that Hamed didn't have any malicious intent.


No, what this tells me is that Mr Al-Khabaz continued trying to hack the server even when told to stop. What's the difference between the reaction we all expect (including your story) and this one? The difference is Mr Al-Khabaz continuing to try to break into the web servers.

He got kicked out of CEGEP. He'll survive unharmed. Sad that he thinks getting publicity is worth it though.


So... here's something that happened to me at my software engineering university.

A friend of mine had just done a summer internship at a security firm and learned a trick or two. Looking at the HTML/JavaScript code of a page, he found an obvious entry point that gave access to anyone else's account provided you had their student number (i.e., it skipped the password step).

My friend showed it to me, and I suggested he tell the IT department. Obviously, the next thing we know, he's accused of "hacking" and threatened by the IT department.

A couple of days later, we checked the website again and realized that a trivial "encryption" had been added: you had to reverse the student number, or something like that. And, obviously, it was done purely client-side.

A little pissed off at being threatened for just being nice, we decided to take our revenge. We created a web page explaining the story (that we had found an entry point, that we had told IT, etc.), followed by "Try it!" [<enter student number>], which logged you directly into that student's account.

We e-mailed that page to the school's main directors, suggesting a quick fix, and made sure to CC the IT department.

The day after, it was fixed, and we received a real "thanks" from the administration. I guess the trick is to contact a higher authority rather than going directly to the IT department.
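
For the curious, here's a hypothetical reconstruction of why that "fix" was worthless: the transformation lives in the client, so anyone who reads the page's JavaScript can apply it themselves.

    # Hypothetical reconstruction of the client-side "encryption" described
    # above: reverse the student number before sending it. The server still
    # never asks for a password, so an attacker just applies the same
    # public transformation.
    def client_side_obfuscate(student_number: str) -> str:
        return student_number[::-1]        # the "trivial encryption"

    def forged_login_url(victim_number: str) -> str:
        token = client_side_obfuscate(victim_number)
        return f"/login?sid={token}"       # illustrative endpoint only

    print(forged_login_url("20130142"))    # logs you in as that student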


I'm going against the general idea here, but the college issued a statement:

http://www.dawsoncollege.qc.ca/home

Basically, they say Ahmed did more than just what is reported in the article, and they can't publicly say what he did - because that's private info about Ahmed that they're legally obliged to protect.

Now, I'm not taking a position in favor of the college or in favor of Ahmed. I'm just saying it's not all black (or white). The National Post article is biased and we're missing some info. We should remember that before going crazy on a witch hunt.


Perspective: That bit about protecting his privacy is the same sort of excuse Ortiz's office gave in their initial response to Aaron's death.


It might be used as an excuse, but they're indeed not authorized to disclose a student's misdeeds to the public, or any other info for that matter.


The site is now 403'ing, but I'm really curious what else he could have done and didn't admit to in his story. Personally, this all makes sense right up until the point where the president of Skytech says Ahmed should not have run his tests but that he understands Ahmed was not being malicious. But then Skytech wants him expelled, and the university wants to protect Skytech's interests? Expelling him would get the story out and accomplish literally the opposite of saving face, even with his inability to disclose details.

Based on other stories of bureaucratic ignorance, it's easy to jump on the administrative cover-up blame train, but something about this doesn't quite mesh, and the fact that the story's only sources are 1) Ahmed and 2) a generic students' rights organization makes it difficult to digest.


I graduated from CS at Dawson and know the faculty quite well. I had the same exact reaction as most people when reading the article up until the point where I saw that 14/15 of the faculty members voted in favour of expelling the student. That right there makes me wonder what else he did.

The faculty told me that there are other things that caused this and they are unable to discuss them with me.

I wish it were possible to get that information but I know them and I trust them.


I also graduated from CS at Dawson. I've been told that Taza and François Paradis know each other quite well and that this is probably the result of said friendship. But this is all hearsay.


That's interesting. I went to see them today and it didn't seem like they were coerced into expelling him. They seem to really have something that makes them strongly believe it was necessary. Gah, I'd love to know.


It might not be black or white. The kind of people who will abusively kick someone out to cover their ass are the same kind of people who will bend the truth later.

We just don't know.


I love the part of the story where the guy naively assumed it would take his school less than two days to fix the vulnerability. In reality, it would probably take them months.

How long did it take Sony to fix their issues? Oh, right, it took someone exposing them publicly. It's unfortunate how broken some IT organizations are, and that they would rather kill the messenger than fix things.


It's apathy. The people "responsible" for the service don't actually care, and probably won't be punished for the failure of the service. Hence, the vulnerability (and the publicity) only makes more work for them, so they shoot the messenger as a form of blame/revenge.


And meanwhile, the student data is at risk on the Internet. Every org needs a better plan than that, especially when change management takes weeks or months and this requires immediate action.


It involves humongous amounts of pre-meetings, meetings, post-meetings, legal documents, reviews of meetings, implementation strategy, review of implementation, certification/acceptance, post-...

You get the picture. In big companies it can take a long time.

Essentially, it is a very broken system that destroys itself.

It's as if you need a manager to watch over a manager who watches over a manager.

It's funny to work at such companies; I got fired from one when I said everything I thought about them.


This headline is somewhat misleading. The student was expelled, not for finding and disclosing a security flaw (he was actually congratulated and thanked for this), but for later running a pentest software suite without permission to "verify" if the bug had been fixed.

That's not to say the expulsion doesn't still reek of BS, but Ahmed's hands are not completely clean here.


That is probably just their excuse. I think it's quite reasonable to check whether someone has fixed a security flaw that puts your own information at risk. It's like trying to open (without the key) the safe at the bank that has your money in it.


Try walking into the safe deposit box area at the bank absent escort or previous notification and see how that works out for you.

Again, the school is on record as giving him kudos for reporting the error - it's perfectly reasonable to assume that someone will not launch offensive penetration testing tools at your site, without notice or permission, just because they have reported the bug in the past.

He could have tested the bug without the pentest software, besides. Just because someone points out a crack in your window doesn't give them carte blanche to try breaking it after you said you fixed it.


The webserver did escort him into the room with the safe deposit boxes.

He has a key, they let him in, that's their job. The problem is that he could open his box, or any other box, without actually using the key.


We're laboring a physical analogy quite hard, here.

Again, the problem isn't that he found and disclosed a bug, the problem is that he attempted to exploit that bug after the fact.

You do not have the right to do that. Pure and simple.

Finding and disclosing a bug is one thing, utilizing it is something else entirely.


He was not 'exploiting' it. He was checking if it could be exploited, just performing a gentle tug.


The problem is that he used an auditing/penetration-testing tool POST-disclosure, and did it without authorization. The availability of these tools puts weapon-grade exploits in the hands of people with limited understanding of the consequences. I don't have an issue with the availability -- better that we learn from our history with Full Disclosure and provide best-of-breed tools to simulate attackers -- but responsibility and individual accountability are at an all-time low. These tools will light up the alarms immediately, and the user will have limited understanding of why.

Let's assume it was not SQLi but an application authorization logic bug, i.e., changing a parameter passed by the browser allowed access to the whole record set. He did the right thing and told the vendor -- but after the fact he ran a tool that probably simulated SQLi on every damn parameter! That's like smashing a car window after telling the owner he left it unlocked.

Even a brain-dead sysadmin would notice it in the logs, and likely whatever SIEM they have would fire a high-priority alert.

He did this without authorization, and the company did the right thing here. In this post-aaronsw world, we can't just assume that every n00b clown white-hat hacker is totally innocent of all crimes even if acting with the best intentions. People need to take responsibility for their actions. An ignorant click can be just as criminally negligent as stabbing a dude in the face.
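
To illustrate "light up the alarms": even a naive signature rule catches scanner traffic on the first request. A toy sketch (the patterns are made up for illustration, not any real SIEM's rules):

    import re
    from urllib.parse import unquote

    # Toy SIEM-style signature: classic SQLi probe fragments. A scanner
    # that "simulates SQLi on every parameter" trips this immediately; a
    # single careful manual request might not.
    PROBE = re.compile(r"('|--|\bunion\b|\bor\b\s+1\s*=\s*1|\bsleep\s*\()",
                       re.IGNORECASE)

    def looks_like_sqli_probe(query_string: str) -> bool:
        return bool(PROBE.search(unquote(query_string)))

    print(looks_like_sqli_probe("id=1"))                # False
    print(looks_like_sqli_probe("id=1'%20OR%201=1--"))  # True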


What is with all these analogies that equate testing with smashing things.

Stop it.

Stop. It.


My name is Eduardo Gonzalo Agurto Catalan. I am an entrepreneur in the field of IT security and a digital rights activist. I would like to have Ahmed Al-Khabaz's e-mail or other contact information in order to reach him and discuss how I and a few fellow experts could help. We believe this is a great injustice and that the business community cannot stay passive towards this situation, which we perceive as a kind of bullying. You can contact me at: eduardogonzalo@hotmail.fr


Ahmed, I am assuming that you are following this discussion.

Based on the article, your life probably doesn't feel so good right now. Sorry to see a bright person in such a situation.

Give me a ring if you are looking for an internship, job or start-up experience in Montreal. We are in town (walking distance from Dawson actually). By the nature of our business, we also have good connections with academia if that can help (www.tandemlaunch.com).

My login is my name so you can reach me at [firstname].[lastname]@tandemlaunch.com


Maybe the answer is: if you find a problem like that, don't keep it a secret between you and the person in charge.

Just go to the school paper or town paper and let them report it.

He did great up to the point where he tried to pen-test after reporting it. I understand the intellectual curiosity to see if people are doing their jobs, and it's too easy to armchair-quarterback, but if you bring attention to yourself by reporting a problem, you can be sure they will watch you, and not necessarily the problem.


While I do not agree with the way this student was being treated, running Acunetix on a system is quite invasive. Regardless of his intent, the consequences might have been data loss and/or denial of service if the system was built poorly enough. Doing extensive vulnerability assessments without consent is really not a good idea.


I hope someone offers him an internship or job. It sounds like he may have a lot of raw talent.


Or at least attitude. Which transforms into talent anyway in the long run.


This is a perfect example of 'No good deed goes unpunished'.

The best action to take when you find a security flaw is to do nothing. Let someone evil abuse the flaw and make the guys miserable enough to realize the importance of responsible disclosure.

Without that, the guy's ego is going to take it as "How dare he point out a problem in my/our work" and not "Thanks for saving me before somebody could screw me."


"This type of software should never be used without prior permission of the system administrator, because it can cause a system to crash."

Remind me to never, ever use Omnivox, or any Skytech software, ever.


You probably won't have to, but as a student, you don't have a choice ;) Your course information, schedule, homework, etc. are all on it.


In Australia, I'm happy to say all I need to do is report a data leak to the privacy commissioner and they'll basically investigate what's happening and force changes.


Reminds me of when I was a kid: I almost got expelled because I found a security issue in the school's network. I could access everyone's files. They also didn't like it when I pointed out they were running cracked versions of Macromedia Flash on all their PCs. Let's just say I'm glad I didn't get expelled. But I'm pretty sure they just saw me as an annoying fuck, and that's all. I don't think they really cared, but they were 'forced' to put time and effort into making their network more secure.


Happened to me in 2000 in France. Same sort of stuff. Didn't kill my career. Just went elsewhere. I guess the French education system at least had this that it couldn't ban me nationwide :)


Clearly the negative reputation Dawson CEGEP has should be applied to the administration, and not the students.

What a clusterfuck. Since when do CEGEPs expel students for running security checks?


Maybe consider punishing the negligence of the person who wrote the insecure code instead? But I don't think most people, especially lawmakers, even understand that security vulnerabilities are caused by flawed code, which is caused by human error. So they tend to shoot the messenger instead.


I was in a similar situation in college. I was asked to sign a non-disclosure agreement or get arrested. I told them to go to hell and file a lawsuit if they wanted to. Nothing happened eventually. Thank God for the excruciatingly painful justice system of India :P


It seems like there's more to this story, and it centres on his actions two days after the report.

I've seen things like this happen before. You find a bug, you report it, they tell you "oh we're getting on it immediately". Some time goes by and you think, hey, did they fix it? You look, discover "nope", think "man I bet those guys would fix it if I lit a fire under their ass" and try and use the bug to deface the site, or something.

This is logic that makes sense to a 20-year-old (speaking as a former 20-year-old...). I've seen it happen before. The article doesn't say this, but perhaps, reading between the lines, the second attempt did not have a pure motivation behind it...


A fellow student and I discovered a similar flaw in my college's system a few years back, but not as serious as this (no social insurance numbers, but emails, full names, phone numbers and addresses).

We brought it to the attention of the head of the IT Department by email. Later that week, the head visited our morning class to discuss this with us.

He discussed the issue with the class and expressed his appreciation for students like us who reacted promptly and responsibly to the issue.


It doesn't come much as a surprise to me that Omnivox has at least a few security flaws. I had to use it during my CEGEP years in Montreal and it's a huge piece of garbage.


He's too good for college. He should just start his own IT security company.


I believe Skytech should hire this bloke for an "and they lived happily ever after" story. It's essentially a win-win for Skytech.


After the way this was handled, I'd live in a cardboard box before I worked for this company. You can't have a healthy working environment without trust.

I'd give it a shot if they fired their president, but that's an unrealistic expectation.


Do you think there is a chance that the university overreacted without the company in the loop?


The president of the company is the one who allegedly intimidated the student into signing a NDA by threatening to call the police and have him arrested. If that's how it happened, then it's irrelevant what the school did.


> The president of the company is the one who allegedly intimidated the student into signing a NDA

Missed that part - now it makes me rethink my suggestion. He should probably just look around on HN. :-)


Maybe, but I hope Ahmed is more ambitious than to work for a company he condemned for "sloppy coding"...


Ahmed could probably lead the charge to turn the "sloppiness" around. Besides, his career seems to be in the doldrums at the moment, so I don't think it would be a bad choice. FWIW, he might even get to run, on Skytech's products, the same testing software he was expelled from Dawson for using. That would be a comeback of epic proportions!

On a serious note, can't he appeal to some education ministry outside the college?


Apparently, he's been offered a job by Skytech!

http://news.ycombinator.com/item?id=5090108

