Am I crazy or is this not blatant insider trading?
This will forever be the legacy of Eric Holder, the man who changed Justice Department policy to go after smaller 'fines' as settlements instead of prosecuting crimes, for the simple reason that fines are easy to win and criminal cases can be lost.
Justice is now escapable because it's been deemed "too difficult to pursue"
This policy change could also perhaps be attributed to lobbyists seeking to maximize profits and minimize risks for corporate clients who are knowingly breaking the law.
> Prosecution rates against crimes by large financial institutions are at 20-year lows. Holder has also endorsed the notion that prosecutors, when deciding to pursue white-collar crimes, should give special consideration to "collateral consequences" of bringing charges against large corporate institutions, as outlined in a 1999 memorandum by Holder. Nearly a decade later Holder, as head of the Department of Justice, put this into practice and has demonstrated the weight "collateral consequences" has by repeatedly seeking and reaching deferred prosecution and non-prosecution agreements and settlements with large financial institutions such as J.P. Morgan Chase, HSBC, Countrywide Mortgage, Wells Fargo, Goldman Sachs, and others, where the institution pays a fine or penalty but faces no criminal charges and admits no wrongdoing. By contrast, in the previous decade the Bush administration's Department of Justice often sought criminal charges against individuals at large institutions regardless of "collateral consequences," as in cases involving Enron, Adelphia Communications Corporation, Tyco International, and others.
Also I get that it's Wikipedia, but the comparison to Bush is ridiculous. The Enron story is a long one. Here's Gray Davis's perspective on exactly what it means that the Bush DoJ "sought criminal charges:"
> "I inherited the energy deregulation scheme which put us all at the mercy of the big energy producers. We got no help from the Federal government. In fact, when I was fighting Enron and the other energy companies, these same companies were sitting down with Vice President Cheney to draft a national energy strategy."
Furthermore the whole reason Holder is using the term "collateral consequences" instead of "collateral damage" is because that term was used by Dick Cheney and the press to describe why he does not care about civilian deaths in Afghanistan and later Iraq. It would be a stretch, but surely the lack of investigations into no-bid war contracts like Halliburton's had as much to do with their "collateral consequences" on military operations as it would on their paychecks.
Obama inherited multiple quagmires due to disregard of the consequences of criminal justice policy.
Normal people will never be on the wrong side of a banking fraud, except if their bank goes out of business. What is justice there?
So while I believe bad people should go to jail, I sympathize with Holder's point of view.
We should be pointing the finger at the people who knew all these things and still put him in charge of the justice department. We should also be pointing the finger at the ones who have the power to change these policies now, but fail to do so.
At this point Holder is just a scapegoat.
Meanwhile, I'm some kind of "radical" for even pointing this out...
Only for entities which are wealthy enough to effectively defend themselves against the awesome power of federal prosecution!
Only if you're rich enough.
p or true => true
> These sales were not made pursuant to a Rule 10b5–1 trading plan.
I'm not an expert in stock trading, but this logic seems very plausible. A Plan does not preclude other arrangements to sell shares.
Maybe such a thing does indeed require amending The Plan, but I haven't seen anyone with expertise chime in. I'm just saying that, logically, "pre-arranged" does not necessitate "working within Rule 10b5-1".
So if it was pre-arranged but did not follow the rules, the pre-arrangement doesn't matter: it still counts as illegal insider trading, as if it had not been pre-arranged.
The reason is super obvious: otherwise it would be easy to do insider trading in a stealthy way.
You don't have to be an expert. Practically everyone who isn't rank-and-file gets the dossiers on this nonsense in a public company.
I'm not aware of the details, as I'm just a peon in Back Office, but I do know traders pay attention to not-insider "insider" trading announcements from the SEC (yes, this pretrade information is publicly available from the SEC). I have no idea about non-US rules.
On a serious note: there should be a mandated, periodic, third-party security audit by neutral parties for all entities which deal with user data beyond a certain specified level of sensitivity. It should not be left to their discretion when to run such an audit from their end. Whether an entity similar to SEC for the stock exchange is desirable can be debated, but the current laissez faire approach to data will lead to even more such disasters.
The reverse now is that companies take the blame and legal liability for being negligent in their security practices.
That said, no organization, public or private, is impervious. Heartbleed and Meltdown should have driven that home to anyone who thinks otherwise. Determining where the line of negligence lies will be a harder one to draw, though for civil liability it may not even matter (which is fine for Google and Apple, and death for small businesses).
Additionally, Equifax then paid on three separate occasions for external security audits at the direction of upper management (one ordered by Smith, with the auditors directed to report the results only to Smith).
I haven't seen enough information to give a definitive answer, only the conjecture that the external audits were driven by outside pressure, likely from the SEC. So the audits were performed, but essentially as a formality, and management largely disregarded the findings.
Standards set by bureaucrats are usually written by special interest groups and don't achieve the desired outcome at a reasonable cost.
In one case, I saw a "private cloud" provider underwrite their client's system by owning the "software-development-release-cycle". They were mandating quarterly releases and three-month manual testing and regression periods.
They put themselves in a situation whereby they could charge the client for the tin, administering the process, the time and materials for the deployment and testing and the indemnity premium.
They reduced their risk/exposure because infrequent releases and such long regression cycles meant assurance levels were rarely met within the dedicated window. There was a very long tail of unreleased features. In summary, they mitigated any risk by choking the product, reducing the number and size of releases to deliver a fraction of the value available.
We learned to work around it by taking advantage of feature switching, but quarterly releases are a death knell for a product.
Just about any managerial incompetence can be spun as "securities fraud" since the basic presumption of most companies is that management is competent. Maybe they'll give up on that in order to reduce their exposure to lawsuits.
With control, some administrative body says you have to do X, Y and Z. And presumably, if you jump through the hoops and it blows up, there is an implicit guarantee. This kind of regulation is common across banks, and in 2008 when all the reserve requirements were deemed insufficient and all the acceptable ratings meaningless, there was a bailout.
The other alternative is a liability approach. You are liable for something (e.g. protection of customer information) and you are responsible for execution in the best way to know possible. If you fail, there is some punitive measure taken.
I personally prefer the second, especially since security is a hard problem. There are best practices, sure, but from my experience, I don't believe regulators and auditors are effective in their stated goals.
In a regulatory environment, if a corporation does not comply, the punishment is increased until the corporation either complies or ceases to exist. Since not existing is bad for profit, corporations usually comply eventually.
In a liability environment, as long as nothing happens, a corporation can continue down a dark path with no ill effects and abuse the rules as it sees fit. It's only costly when things go wrong, and you can calculate the likely cost of things going wrong.
In a liability environment, you attempt to describe the true cost and risk and allow the company to adapt to changing environments.
A practical real-world difference is password rotation requirements. Most real-world security professionals knew the dangers of strict password rotation for years before NIST published guidance on it. And because the standardization process is necessarily slow, the NIST guidance then has to flow down to other departments so they can update their own standards, and so on.
Today, password rotation requirements are still rampant in most financial and healthcare companies, despite NIST now advising against them. Often this is because companies don't want to spend the money to alter policies, but also because, in the 'no man is an island' interconnectedness of firms, policies in one firm can impose corresponding policies on others. That means your new startup will need to rotate passwords every month in order to integrate with (say) Bank of America.
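For what it's worth, the alternative NIST ended up recommending (SP 800-63B) is screening new passwords against lists of known-breached/common passwords instead of expiring them on a schedule. A toy sketch, with an obviously illustrative denylist and length threshold:

```python
# NIST SP 800-63B style check: instead of forcing periodic rotation, screen
# new passwords against a denylist of known-breached/common passwords.
# The denylist here is a tiny inline stand-in for a real breach corpus.

COMMON_PASSWORDS = {"password", "123456", "admin", "qwerty", "letmein"}

def password_acceptable(candidate: str, denylist=COMMON_PASSWORDS) -> bool:
    """Reject common/breached or too-short passwords; no expiry date involved."""
    return len(candidate) >= 8 and candidate.lower() not in denylist

print(password_acceptable("admin"))                          # rejected: on the denylist
print(password_acceptable("correct horse battery staple"))   # accepted
```

A real deployment would check against something like the Pwned Passwords corpus rather than an inline set, but the policy shape is the same: vet at creation time, don't rotate on a timer.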
Apart from all that, if you genuinely think about it, we want to do cost-benefit analyses on this. If the risk of leaking customer data is low enough, and we value the data at some $x per unit, then there is some cost $y above which protecting it isn't worth it. This makes intuitive sense, since customer data, no matter how personal, isn't worth infinity. If it were, no one would collect it. No one. By giving me your phone number, you would suddenly make me the holder of an artefact of infinite value; likewise by giving Amazon your shipping address. No one wants that liability, and information exchange would halt despite everyone (in reality) wanting it to happen.
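To make the cost-benefit point concrete, here's a back-of-the-envelope sketch. Every number below is a made-up assumption for illustration, not a real valuation:

```python
# Back-of-the-envelope breach cost-benefit: spend on mitigation only if it
# costs less than the expected loss it prevents. All numbers are illustrative.

def expected_breach_loss(records, value_per_record, p_breach):
    """Expected annual loss if a breach exposes every record."""
    return records * value_per_record * p_breach

records = 140_000_000          # Equifax-scale record count
value_per_record = 5.0         # assumed liability per exposed record, $
p_before = 0.05                # assumed annual breach probability, unmitigated
p_after = 0.01                 # assumed probability with the security program
mitigation_cost = 20_000_000   # assumed annual cost of the security program, $

savings = (expected_breach_loss(records, value_per_record, p_before)
           - expected_breach_loss(records, value_per_record, p_after))

# Mitigation is rational only while it costs less than the loss it prevents.
worth_it = mitigation_cost < savings
print(savings, worth_it)  # 28000000.0 True
```

The interesting policy question is who sets $x: if companies are only liable for a few dollars per record, the math above tells them to spend very little.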
What I hoped for is that I, as a startup, could beat the government at picking a security standard, because the government has to cater to everyone, while I just have to be better than the Big Slow Dinosaur (BSD). But because I have to integrate at some level with a BSD that has to follow the government, I lose that advantage.
I suppose, with a regulator capable of moving fast to respond to threats, aware of sunset periods, regulation could be superior. Somehow, I expect forums like this one to be full of software engineers complaining about "the constantly changing requirements from NIST" if that were to happen.
A Liability regime would encourage companies to be actually secure, because they're responsible for what happens to data that is lost. A Control regime would encourage companies to check boxes from a list provided by a regulator. A Liability regime encourages being pro-active vs reactive to the regulator in a Control regime.
There are middle grounds, like HIPAA or GDPR. Both give companies some leeway in terms of creating their own checkboxes, and fines are for actual breaches, not just improper process.
Challenge 2 is actually getting any legislator/regulator (at least in the United States, in the current climate) to agree that this is an important and urgent regulatory matter worth the added burden on companies, and that they should move on it now to improve overall national security.
Challenge 3 is to only make the request once, in a standard format so that the data is actually relevant instead of overlapping requests from different organizations that turn useful data into a paperwork drill that is irrelevant by the time it leaves the org.
Lastly, I'd say some sort of open source middleware proof of concept to exchange this information would go a long way toward accountability. Industry could even propose the best option themselves via their existing interest groups.
They're all merging with their services consulting businesses again, post-Arthur Andersen scandal, and while there may be controls and training, and they're all pretty serious about it (I worked for such a company at one point), it doesn't seem like many folks are getting dinged on violations of late.
I agree that it is worth the cost (as a citizen whose data is being lost) but in the current landscape the companies may argue that it is not, and might be right.
See: Boeing, iso certifications, building inspectors, health inspectors, any large civil engineering or aero firm, etc.
To be clear, I do agree that it is absolutely needed in the US. I just have no idea how you could implement it. Culturally the US seems to think that asking forgiveness and looser regulation for businesses is the right direction.
I was shocked (shocked!) to learn that the "municipal" inspector of works was a private individual, who was paid directly by the building company. Not by me - by the company that was supposedly being monitored. I didn't even have his name and address.
[Edit: I am in the UK]
Edit: Of course it would be irresponsible to claim that they were consistent. Each inspector has their own ideas: "that should really be 2x12, not 2x10", or "that stairway is too steep", or "that should look more like the other houses", etc. But I do see value in forcing everyone to face a semi-consistent set of rules.
A critical data breach doesn't threaten its bottom line, unless someone uses such a breach to turn stolen credentials into access at one of the credit-querying institutions and reveals, for example, the detailed criteria by which such an institution grants a loan.
- What's the password
- Yes, the password!
>Likewise, Equifax “protected” one of its portals used to manage credit disputes with the username ‘admin’ and password ‘admin.’ This portal allowed access to a vast cache of personal information, including employee names, emails, usernames, passwords, consumer complaint records, and the Argentinian equivalent of Social Security numbers. The portal also granted administrative access allowing intruders to add, delete, or modify records. A November 15, 2017 article in Forbes quoted cybersecurity expert Wes Moehlenbruck, who stated that this was one of many “very grossly negligent security practices” at Equifax. The article continued, “‘Admin/admin’ as a database password is a surefire way to get hacked almost instantly,’ Moehlenbruck says. ‘A production database with this account smells of poor security policy and a lack of due diligence.’
Seems to agree with what the GP was saying.
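If anyone wants to check their own estate for this class of problem, flagging vendor-default credentials is trivially automatable. A minimal sketch; the service names and credential pairs here are invented examples, not real endpoints:

```python
# Hypothetical audit sketch: flag services still configured with vendor-default
# credential pairs like admin/admin.

DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def find_default_creds(inventory):
    """Return names of services whose configured login is a known default pair."""
    return [name for name, (user, pw) in inventory.items()
            if (user, pw) in DEFAULT_CREDS]

inventory = {
    "dispute-portal": ("admin", "admin"),            # the case from the quote
    "billing-db":     ("svc_billing", "mX9!qTk2vLp"),  # made-up non-default
}
print(find_default_creds(inventory))  # ['dispute-portal']
```

The hard part isn't the check, it's having an inventory of services and credentials to run it against, which is exactly the process discipline Equifax apparently lacked.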
Colleague #1: "What password shall we set?"
Colleague #2: "Just leave it default for now as we're still testing, we will change it later".
Colleague #2: "It's really past due time to change the database password, but first we have to make sure all critical systems can still access the database."
I know I'm stating the obvious, but I've seen some worrying attitudes of "just in time" that seem to go hand in hand with a misunderstanding of Scrum Sprints or Kanban. Where people concentrate on the tree and ignore the vast interconnected forest around them.
To keep high security at all times you need:
1) Process aka bureaucracy. Mandatory checklists. Checklists are returned and inspected by others. Anything missing or uncertain is checked again and fixed.
2) People who are responsible for security must be independent from other concerns. They may have an adversarial relationship with the people responsible for getting things done when there is a conflict of interest. People responsible for security must have the status and power to enforce it.
Consider a scenario where you need to take the system down and fix something quickly. It's completely reasonable to allow a dummy password for a few hours while people are around fixing the problem, until the system is back online.
But if there is no process in place to remove security temporarily and then restore it, something is always forgotten. The person who ordered the password change isn't using the system and forgets the whole thing. The people who use it don't say anything, and it becomes the new normal.
You need to mandate checklists. You force people to use them and return them. It's costly and makes things slower.
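The "temporary dummy password becomes the new normal" failure mode is exactly what a mandated checklist can catch. A toy sketch, with invented names, of a register that time-boxes security exceptions so a forgotten restore gets flagged instead of silently persisting:

```python
# Sketch of a time-boxed security exception register: a temporary weakening
# (e.g. a dummy password during an outage) is recorded with an expiry, and
# anything past its expiry is surfaced for escalation.

from datetime import datetime, timedelta

class ExceptionRegister:
    def __init__(self):
        self._open = {}  # exception id -> (description, expiry time)

    def grant(self, exc_id, description, hours, now):
        """Record a temporary exception that must be restored within `hours`."""
        self._open[exc_id] = (description, now + timedelta(hours=hours))

    def restore(self, exc_id):
        """Close the exception once security is restored."""
        self._open.pop(exc_id, None)

    def overdue(self, now):
        """Exceptions past their expiry: these go on the review checklist."""
        return [desc for desc, expires in self._open.values() if now > expires]

reg = ExceptionRegister()
t0 = datetime(2018, 1, 1)
reg.grant("inc-42", "dummy DB password during failover", hours=4, now=t0)
# Nobody restored it; six hours later the checklist review flags it:
print(reg.overdue(now=t0 + timedelta(hours=6)))
```

The register itself doesn't enforce anything; the point is that the checklist review reads `overdue()` and a human with authority (point 2 above) follows up.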
Dealing with standard practices, most especially violations of your #2, have contributed in large part to my getting off this ride.
If an attacker is able to reach your DB, the ballgame is 90% over already. Yes, I understand that a strong username/password on the DB server would be one final gate, but unless I'm living in some alternate reality, plenty of companies use weak/shared/guessable passwords for things that shouldn't be reachable from the outside like this. And honestly? Securing the DB with one extra (potentially useless) line of defense is an extremely low priority for most businesses.
Sure they might eventually break those too, but it's time and effort and opportunity to be caught.
Assuming patched, modern database software, that should keep attackers at bay until the failed login attempts are spotted. It's very embarrassing that they (allegedly) did not do this. Of course, in this case it's a custom web application, so it would probably be at least somewhat more of a challenge to compromise.
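Spotting those failed login attempts can be as simple as a sliding-window counter over auth logs. A minimal sketch; the event format, window, and threshold are all illustrative:

```python
# Minimal brute-force detector over auth-log events: count failed logins per
# source IP in a sliding time window and flag anything over a threshold.

from collections import defaultdict

def flag_bruteforce(events, window=300, threshold=5):
    """events: iterable of (timestamp_seconds, source_ip, success_bool)."""
    failures = defaultdict(list)
    flagged = set()
    for ts, ip, ok in sorted(events):
        if ok:
            continue
        failures[ip].append(ts)
        # keep only failures inside the window ending at ts
        failures[ip] = [t for t in failures[ip] if ts - t <= window]
        if len(failures[ip]) >= threshold:
            flagged.add(ip)
    return flagged

events = [(i * 10, "10.0.0.9", False) for i in range(6)]  # six rapid failures
events += [(1000, "10.0.0.7", True)]                      # one normal login
print(flag_bruteforce(events))  # {'10.0.0.9'}
```

Real deployments use fail2ban or SIEM rules rather than hand-rolled scripts, but the logic they implement is essentially this.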
I can't imagine a secure database password would have prevented the hack.
Though the lack of a secure database password probably hints at Equifax's attitude toward security in general.
The document does not state any further detail than this, so it is a bit unclear as to what exactly was located on this portal - does not seem like it was their main database anyhow.
Still incredibly incompetent though.
It took maybe 5 minutes at Experian and Transunion. Equifax's site 500'd when I attempted to set it up, and they had no way to take a report. When I called them on the phone, they suggested I send them a fax, and their customer service rep suggested I was a useless person who would never go anywhere in life when I said that was unacceptable and they should be out of business.
They should go out of business.
Nobody thought to raise that? to anyone?
Although I can understand it. On one project I have several people who now call themselves DevOps who have practically zero experience with systems operations _or_ development, and who have done some utterly, incomprehensibly stupid things. It doesn't matter how fancy your cloud tech is: if someone creates VPCs with default ALLOW ALL rules, stuff is going to get compromised. Worse yet, some are _fighting_ against changing the ingress rules, because that would mean admitting they were wrong! I'd at the very least rotate them out and replace them if I could. (rant over)
IMO, the first step to fixing the problem is giving DevOps the proper amount of time to design the required permissions. It sounds easy from the outside, but again, IAM can be very complex.
Additionally, DevOps must think security first. That means a newly deployed service starts with zero access and goes from there. Developers are going to be annoyed, but DevOps needs to work with them, and vice versa.
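The "zero access, then add" posture is also easy to audit mechanically. A sketch that flags the default-ALLOW-ALL mistake; the rule dicts loosely mimic cloud security-group shapes but are invented here, not a real API:

```python
# Audit sketch for the "default ALLOW ALL" mistake: a new service should start
# with zero ingress, and any rule exposing every port to the whole internet
# should be flagged for review.

WORLD = "0.0.0.0/0"

def open_to_world(rules):
    """Return the rules that expose every port to the entire internet."""
    return [r for r in rules
            if r["cidr"] == WORLD and r["from_port"] == 0 and r["to_port"] == 65535]

rules = [
    {"cidr": WORLD, "from_port": 0, "to_port": 65535, "proto": "-1"},   # ALLOW ALL
    {"cidr": WORLD, "from_port": 443, "to_port": 443, "proto": "tcp"},  # narrow HTTPS
    {"cidr": "10.0.0.0/8", "from_port": 5432, "to_port": 5432, "proto": "tcp"},
]
print(open_to_world(rules))  # only the first rule is flagged
```

Wired into CI against your infrastructure-as-code, a check like this makes the wrong default a build failure instead of an argument.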
I'm seeing a lot more inexperienced people getting access to stage/production systems (i.e. internet-facing to a greater or lesser extent) due to the DevOps paradigm. Of course the role sounds cool so people advertise for it, and apply for it, but there's a serious lack of understanding of just what it is! Developers need a good understanding of Operations, and Operations Admins need a good understanding of Development.
Things like not understanding the reason why you'd want to test the network access and DNS lookup from the stage pods instead of their local machine. Or not knowing how to perform basic source control tasks.
Of course, I can be dismissed as a grumpy old man. I am, I'm in my 40s and 23ish years of Linux operations has, I hope, taught me a couple of lessons. But I'm not yelling at the kids to get off my lawn, I want to teach the kids about correct garden maintenance, weeding, and when to plant bulbs and seeds (to stretch a metaphor way too far!). I find people are resistant to learning basics "because the cloud", or putting in due diligence because they're paid too little (which I fully understand!)
Sorry, grumpy old sysadmin who is now a team lead with lots of responsibility and too little time to brain dump his 20+ years into some younger heads. I'll try to lighten up :)
Inexcusable. Change the login/password to anything more secure, even if it's temporary.
And yet, when things like this happen, people want to blame the CEO. Sure, the buck stops there and that person is really responsible for everything. But should the executives really be concerning themselves with the database password? It's an utterly irresponsible thing, and those actually working on the product should have known better.
Unpopular opinion around here, I know....
Another variation is not even knowing that default accounts exist, e.g. when there is a CLI command to add a new user that was run during install.
This obviously isn't always the case - but it happens a lot.
You'd be surprised how much "inertia" there is in other sectors, some stuff keeps happening even when alternatives are not just better, but also cheaper.
There is an unbelievable amount of sensitive data, whether corporate or personal, unencrypted on network shared drives and laptops across corporate America.
I'm pretty much a lay person when it comes to security, so I don't know generally how safe or unsafe that is. But there is definitely a sense that, as long as you don't get phished, everything on-prem is basically "secure" and IT is just taking care of it.
For example my employer had strict rules about data that can be stored on a cloud service, but less-strict rules about data that can be stored on an on-prem network drive.
TransUnion and Experian should be enough. I go through my reports and all three are pretty much the same.
Though competition without consequences clearly isn't.
I'd like to see a few more bureaux. Or the role nationalised. Government is at least in theory answerable to the citizens. Though government as financial vetter introduces numerous other issues.
The question of why people require credit for day-to-day financial activities, many carrying balances, is another part of this question. Sufficient pay, collective bargaining, workplace and tenant / homeowner protections, and wealth and land taxes are a few policy changes outside the data security arena which would help markedly.
Credit rating isn't exactly a free market, but I'm very skeptical that we should reduce any significant oligopoly from three corporations to two.
The other part is if I'm applying for any kind of credit, I assume the most conservative lender would look at all three results and just go with the lowest credit score.
But I agree, even in a semi free market, competition is good.
The password for that console was "sesame". Transactions were test-worded but otherwise sent in plain text. When I worked in Europe a few years later, I constructed my own telex bankwire transaction from a hotel in Italy. It was to my account, for my money, but it worked, no questions asked.
Password auth is just a speed bump. Once the physical/network barriers are breached, it's just a matter of time.
If I remember correctly, they immediately stopped the testing and reported it to management, but I believe they never heard back as to whether the problem was fixed. If anyone knows more details about this, I am sure we would all appreciate an update.
Also: "1..2..3..4..5 That's the kind of password an idiot puts on their luggage!" --Spaceballs
At least that was kinda my takeaway from those jobs. I could just have a skewed version based on the stuff I worked on.
Now a pentester? If they don't spot this during an assessment, they suck. But pentesting isn't always performed on a rigorous schedule.
* Communicated clearly so that there is a path to compliance.
* Enforced with penalties so that people within non-infinite-budget organisations can sell the change as a means to cut costs.
If you only do the 2nd, then it is like forcing children to swim by throwing them off a boat. Some people think that "sink or swim" forces someone to swim; it doesn't. It just presents two possibilities: swim or die.
I can completely imagine a headline like this when there is an old basic-auth layer in front of an application that has a real password. It just seems unlikely that all the logging in the customer service portal would say "updated by admin".
I imagine that, developing something like Equifax today, you'd want to hook it up to your SSO and use row-based security so a user can only read their own row, and then focus your efforts on making sure user accounts, especially privileged ones such as staff, aren't being abused. (You'd still probably establish system-to-system trust, such as keys between your API and DB.)
But it's so much easier and cheaper just to connect using a username and password, and then do whatever the framework you chose does by default.
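Even without full SSO, the "a user can only read their own row" part can be enforced at the application layer by scoping every query to the authenticated session's user id. A toy sketch using an in-memory SQLite table; the schema and data are invented:

```python
# Sketch of application-enforced row scoping: every query is bound to the
# authenticated user's id, so there is no code path that reads the whole table.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE disputes (user_id INTEGER, detail TEXT)")
conn.executemany("INSERT INTO disputes VALUES (?, ?)",
                 [(1, "wrong address"), (2, "account not mine")])

def disputes_for(conn, authenticated_user_id):
    """The user id comes from the session (e.g. SSO), never from the request."""
    rows = conn.execute("SELECT detail FROM disputes WHERE user_id = ?",
                        (authenticated_user_id,))
    return [detail for (detail,) in rows]

print(disputes_for(conn, 1))  # ['wrong address']
```

Databases like Postgres can also enforce this server-side with row-level security policies, which protects you even if the application query is written wrong.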
Dismiss as in the same penalty Equifax had to pay per breached user.
Ah damn, I gotta change the router password now?