
What does this mean: "...nor do we believe the misissued certificates were used outside the limited scope of MCS Holdings’ test network,..."

If the certificates were used only in a test (private?) network, how did Google find out?

-----


I believe Chrome will report back when it finds an improperly issued certificate.

-----


https://scotthelme.co.uk/hpkp-http-public-key-pinning/

Chrome also preloads HPKP information for many sites.

It's interesting to note that certificates signed by a locally installed CA (e.g. an org's MitM proxy CA) will be considered acceptable. If MCS had just made their own private root CA and deployed it to their machines there wouldn't have been an issue.
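HPKP itself is just a response header (RFC 7469), so you can check whether a site sends one from the command line. A minimal sketch (example.com is a placeholder; substitute a site known to pin):

    # Print any Public-Key-Pins header the site serves. Pins are base64
    # SHA-256 hashes of the certificate's SPKI, plus a max-age and backups.
    curl -sI https://example.com/ | grep -i '^public-key-pins'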

-----


The saddest part is that they weren't even trying to do that. They were trying to be a real CA, but they weren't trying to MITM. (It's common practice for new / young CAs to chain their CA cert off a more established CA.)

They just, for some reason, decided that they wanted to store their cert on a Palo Alto Networks device that supported MITMing as a feature, and accidentally turned on MITM mode and plugged someone's laptop into it.

The intended use case of that feature on that device is in fact to hand it a private intermediate, not a publicly-trusted intermediate.

-----


"accidentally".

Playing fast and loose with a CA key and installing it on random devices like this speaks volumes about their understanding of and respect for the worldwide CA system.

-----


> It's interesting to note that certificates signed by a locally installed CA (e.g. an org's MitM proxy CA) will be considered acceptable. If MCS had just made their own private root CA and deployed it to their machines there wouldn't have been an issue.

I believe this is how Microsoft Family Safety works.

-----


This is how all corporate SSL inspection works.

-----


I give computer (coding) interviews instead of whiteboard interviews, and the candidates are allowed to freely use the compiler. This should be more natural than whiteboard interviews and closer to actual programming work. The candidate may tend to be less communicative than in a whiteboard interview, but I can still observe what they are typing and ask them questions.

-----


I would really appreciate it if you could describe some of your experiences in this in more detail. How many approximately have you given? How is the success rate? How many do you turn away, and why? Have you been able to match up key successes and disappointments with particular signals in the interview, and what were they?

-----


There wasn't any A/B testing, since when I think about whiteboard vs. computer, I already start from the notion that if you can write code on a whiteboard, you can write it more easily and naturally on a computer. I've given 5-10 such interviews.

-----


Normal definitions of 'front-running' entail trading on non-public data. Anyone who describes HFT as front-running is not telling the truth and is not to be taken seriously, because HFT generally has no more reason than slow trading to involve insider information. If your insider information is exclusive and rich, you don't even need HFT.

-----


Depends on the time window. It's not that hard to gain a few fractions of a second in front of large trades, but it's much harder to have enough time for a human to react, thus the need for HFT to exploit such trades.

-----


Could you explain the mechanics in detail? From my days trading, I don't remember any matching engines broadcasting but delaying a quote based on volume (or anything else).

-----


One of the primary strategies I used as a day trader about 10 years ago is what this article describes as front-running (it is not actually front-running, since real front-running is illegal and this was not). While the terminology may have changed since I was trading, the way it works is this.

When a large market order was placed with the exchange, the specialist (NYSE) would not have enough inventory to fill the entire order. What he would do is go up the book and start combining higher limit orders to fill the volume. He would also partially fill it with his own shares. This (I imagine) was how specialists made a lot of their money, because it allowed them to fill orders outside the market rate.

What we would do as day traders is as soon as we saw a large order like this occur, we'd start placing orders outside the market rate on the other side, hoping to get filled when the specialist combines the whole trade at one price. Once that happens, we can then sell that back to a market maker (at the market rate) or wait until the price moves back.

A few years after I stopped, the NYSE moved away from the specialist system (NASDAQ never used one). I haven't followed the state of affairs since. I imagine the same thing still happens now, only you have to be much, much faster to play the game against the hybrid/computer market makers.

source: had my series 7 about 10 years ago

-----


As I understand it, there are lots of ways this can happen. The simplest to understand is based on there being multiple semi-autonomous exchanges. If a trade is too large, it ends up being sent to another exchange. If it's large enough, it's going to move both exchanges. Doing the liquidity calculation to predict such trades takes less time than the trade takes on the first exchange, letting someone get their order into the second exchange before the first order propagates.

-----


If the information that enables "front-running" is available from public exchanges through standard channels, it's still public information, so the "front-running" isn't really front-running. Besides, those big orders could be sent as multiple IOC orders in order to not reveal one's hand, or executed as multiple small orders.

However, I'm not saying "spoofers" should be punished either. The SEC construing (divining?) the "intent" of a soulless automaton to me falls in the realm of not-even-wrong.

-----


Most of this gets really technical, but the idea that you can see an order and get ahead of it is clearly front-running even if it's legal. The issue is that exchanges have an incentive to sell "early" access to information, so generally people do X for a while, then that becomes illegal, or the people being taken advantage of switch to another approach.

-----


>however the idea that you can see an order and get ahead of it is clearly front-running even if it's legal

No one gets to see an order and get ahead of it.

> The issue is exchanges have an incentive to sell "early" access to information

All exchange access is "early" access. There is no way to stop latency advantages.

-----


No one gets to see an order and get ahead of it.

I just described one approach where that can happen.

All exchange access is "early" access. There is no way to stop latency advantages.

There is no need to publish pending transactions in such a way that you can get your order executed on a different exchange before the first one propagates.

Anyway, exchanges are an incredibly complex problem with a lot of perverse incentives. However (ed: IMO) any trading strategy based on implementation details is counterproductive to a free and open market, and thus it harms the U.S. economy.

-----


Pending transactions are never public. Everyone sees the execution on that exchange, but they don't know whether or not some additional portion is being routed to another exchange.

One situation that people often _mistake_ for this type of prior knowledge is the following: You are a market-maker, and at the current price, you want to buy X shares (you're also willing to sell at some slightly higher price, and willing to buy more at some lower price, but at the current bid, your appetite is X). Since the US equity market is very fragmented, the best way to make that happen is to place an order for X shares on all of the dozens of different venues, and hope one of them gets executed. When your order on one venue gets executed, you want to cancel your orders on all of the other venues quickly so that you don't get overfilled and accidentally buy more than X. If you're a person who's trying to sell 2 * X or 3 * X shares, and the price moves down after you sell the first X, it may look like someone knew about the rest of your order and sold ahead of you, but it's actually just the market-maker trying to control their inventory.

-----


"Pending transactions are never public."

Pending transactions should never be public.

(Bad actor 1): Gee, but if they were, I could get 1 billion dollars. (Bad actor 2): No, we could get 1 billion dollars. (Bad actor 1): I like 1 billion dollars.

The point being, it's often hard to steal hundreds of millions of dollars without someone noticing. But front-running, like insider trading, is one of the ways to do so.

-----


I don't quite understand this argument. I certainly agree that front-running of the kind you describe would make someone a lot of money. This is one of the reasons that being a NYSE specialist was hugely profitable back in the pre-electronic era. I'm making the statement that this sort of thing is in fact no longer possible on any U.S. equities exchange.

-----


I think I was talking past people. The original post I responded to said: "If your insider information is exclusive and rich, you don't even need HFT."

My comment was that there is a range of very short-term insider information that you need HFT to benefit from. That does not mean that anyone does so, just that it's something regulators should be, and probably are, on the lookout for.

-----


> I just described one approach where that can happen.

And I'm telling you what you described can't happen. So either you don't understand what is happening or are not describing it well.

>There is no need to publish pending transactions in such a way that you can get your order executed on a different exchange before the first one propagates.

Luckily that does not happen.

> However any trading strategy based on implementation details is counterproductive to a free and open market and thus it harms the U.S. economy.

This is demonstrably not true. A trading strategy that brings prices between exchanges into rationality faster or more cheaply than a single exchange, or coordination between existing exchanges, could manage is very productive and useful to all market participants.

-----


I use the shebang "#!/bin/bash -e".

To get the same effect as "set -o errexit -o nounset", I think you can use "#!/bin/bash -e -u". (There seems to be no single-letter option for pipefail.)

-----


The shebang line treats everything after the interpreter as a single argument: you get at most one argument from the shebang, and the script file itself is appended as the final argument. So it would effectively run something like

    bash '-e -u' $file
But you can do

    #!/bin/bash -eu

-----


This breaks if your script is sourced by another shell. Best use 'set -eu' at the start instead.
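A minimal sketch of that strict-mode preamble, with pipefail thrown in since it has no single-letter flag:

    #!/bin/bash
    # errexit/nounset/pipefail still take effect when this file is
    # sourced, because a sourced script's shebang line is ignored.
    set -o errexit -o nounset -o pipefail

    # With pipefail, the script aborts if cat fails here, not only
    # if wc (the last command in the pipeline) fails.
    cat /etc/hostname | wc -c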

-----


Why not just use smart pointers like unique_ptr and shared_ptr and call it a day? Dealing with raw pointers that require manual management is like running with scissors.
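To make the contrast concrete, here is a minimal sketch of both ownership styles, using nothing beyond the standard <memory> header:

    #include <iostream>
    #include <memory>

    struct Widget {
        ~Widget() { std::cout << "Widget freed\n"; }
    };

    int main() {
        // Sole ownership: freed automatically at end of scope, no delete.
        auto owner = std::make_unique<Widget>();

        // Shared ownership: freed when the last shared_ptr goes away.
        auto a = std::make_shared<Widget>();
        auto b = a;  // reference count is now 2
    }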

-----


I think you could define your own mutt package that doesn't have the optional stuff, although it may be a lot of work.

-----


I don't know much about the nix language, but it looks to me like the gpgme support is optional:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/applicatio...
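If it is a regular build flag, a one-off rebuild without it might look roughly like this. Note that gpgmeSupport is only a guess at the argument's name; check the linked default.nix for what the derivation actually takes:

    # Hypothetical: rebuild mutt with the (assumed) gpgmeSupport flag off.
    nix-build -E 'with import <nixpkgs> {}; mutt.override { gpgmeSupport = false; }'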

-----


The thing is easy to fix. The problem is that (i) most packages are far from minimal and (ii) you lose pre-built binaries.

-----


True enough. I wonder how much of the problem comes from trying to be compatible with software packages designed for other systems.

-----


I never understand why people need to connect industrial plants to the Internet. Do they actually need to control them over the Internet instead of on-site?

And, if they need to use the Internet on-site, can't they make an air gap and segregate the computers that can access the Internet from computers that can access the plant machinery?

-----


I remember a rule a controls engineer once told me: never connect the plant to the Internet. Nothing clever, no humorous quip, no deep insight. Just don't do it. If you do need to get data out to the living world, and you will, then you carefully set up individual firewall rules just for the data system - which very much can not drive the process system.

This is where I begin ranting on the topic.

Why? Because plants aren't secure. They're meant to run all the time by people who may or may not know how to properly use a mouse. There will be passwords taped to monitors, systems that automatically log in to prevent start up delays, and authentication of the order that it-better-just-work-by-default. Under no circumstances should that damn system ever, ever be exposed to the outside world. Not by Ethernet, wireless, or flash drive. It's a young innocent facing the cruel, brutal internet; it's going to get hurt.

New plants are fancy and have highly trained workers with brilliant industrial IP wireless systems with state-of-the-art VM servers and oh god did that guy just plug his phone charger into the damn wrapper HMI and now Windows Media Player has popped up and no one can acknowledge alarms (this is partly why PCs tend to be in large metal boxes, that and dirt/water). Now imagine that sort of silliness, but driven by the less savory from outside the intranet.

Just don't risk it. You'll never have a problem, and no one is ever going to care enough to hurt the plant... except for that one time when suddenly all the convenience and hubris won't bring back the machine that just slagged itself from some malicious command from outside.

-----


The terrifying part of this is that computers with Windows Media Player installed are running critical infrastructure. Shouldn't that be a stripped-down Linux machine with perfectly understood characteristics and close-to-zero attack surface?

Yes, your scenario is bad and I can totally see it happening, but the problem is not that an employee plugged in his phone; it's that you are using a desktop OS for office workers for controls that really, non-optionally, need to always just work.

Maybe it's not a Windows Media Player launch, but what happens if Java shows up in the taskbar wanting an update, or your anti-virus software (yikes) wants to bug you about updating its definitions, or a modal dialog from the OS comes up? The fact that those are even things that can happen is pretty scary.

-----


>The terrifying part of this is that computers with Windows Media Player installed are running critical infrastructure. Shouldn't that be a stripped-down Linux machine with perfectly understood characteristics and close-to-zero attack surface?

Maybe I'm appealing to the wrong sentiment, but most of this control software isn't available for Linux. Large CO2 generation plants, tissue plants, whatchamaycallit, it's just not available. So the problem is with the vendors.

Another problem is that the people who decide to buy the software are completely clueless about what is secure or not. They may ask for 'advice' from their more tech savvy juniors but that is merely a formality to confirm their view point, as anything contrary to their already decided viewpoint is quickly discarded with the mental rationalization of 'He probably doesn't have my experience' or 'I know better because I'm senior and I've been around these longer than him.'

-----


It still makes you wonder why they wouldn't provide a stripped-down industrial Windows for these cases, rather than just a layer of Enterprise apps on top of the regular thing. Even their "Enterprise for Manufacturing" page is all about apps and Metro tablets.

-----


They do, it's called Windows Embedded, and OEMs customise it to include only the components they need.

-----


> that is merely a formality to confirm their view point

Is there a term for this? I often give out to people when I see they're just going to discard my advice if it's not what they have already decided.

-----


Confirmation bias.

-----


Windows has overwhelming market share in the process control industry. Microsoft has long-standing partnerships with the majority of the process control vendors. The attack surface argument was never relevant when networks were physically isolated. There is a slow shift towards Linux; however, many systems have extremely long lifespans.

-----


>The attack surface argument was never relevant when networks were physically isolated.

If the network is designed according to this philosophy, then it will be trivial for an insider to breach the airgap. That could be someone who hates his boss, someone who's about to be fired, somebody getting paid by a competitor, somebody getting paid by a criminal enterprise planning on shorting the stock, or somebody coerced or co-opted by a state actor.

If the process control network is soft and chewy for anyone who can put his finger on an ethernet or USB port, you are still far from secure - as Iran learned, by the way.

Windows Embedded is relatively sane, but that's not going to have Java and Windows Media Player and antivirus software hanging out, and it's (in part) designed to let you whittle its size and attack surface down to exactly what you need. But vanilla Windows having marketshare is just baffling to me.

-----


Seems to me, an insider wouldn't need to "breach the air gap". Quite literally they could just walk over to the controls.

So defending against the disgruntled employee, the impostor employee, or armed invading non-employees... that should be the problem realm for onsite security and management, not software designers.

But yes, you're right. That is baffling. People are fcking terrible with computers, and for most of the roles they shouldn't have to be more competent. The controls should be about as flexible as an ATM's user interface.

-----


>Quite literally they could just walk over to the controls.

Control systems may not be designed for IT security, but they are designed for safety. You would expect:

- Limits that prevent an operator from pushing a parameter to an obviously insane value

- Alarms that sound audibly and visibly on other control panels, in a control room, etc. when a situation is heading out of control or is actively dangerous

- Automated failsafes that take action to correct dangerous situations

- Audit trails that indicate what buttons were pushed, possibly by whom

- Logical access control so that, e.g., line workers cannot change configuration, damaged equipment can be immobilized, a particularly sensitive operation enforces a 2-man rule, etc.

- When an employee is fired (or goes home for the night), he can no longer influence the plant in any way.

All of these would make sabotage by walking up to the controls difficult - at the very least, someone else would know about it in time to evacuate, and at best, the system would automatically correct itself while locking you out and sounding an alarm at your supervisor's desk.

If I've pwned the control system, then I can push parameters beyond the engineers' limits while MITMing and falsifying reports from sensors so that everything appears to be normal, no failsafes kick in, and no alarms go off until everybody is dead. Forensic examination of the audit log would not show me doing anything strange.

If it's my last day and I've plugged a tiny, GSM-enabled, PoE attack platform into an Ethernet port, then the fact that security has taken my badge won't stop me - I can do all this from home.

-----


Not all of these things can be solved by a control system alone, at least not without a ton of investment in RFID and other auto-id infrastructure. Some human is still going to have to administrate your system, and he or she needs to be educated and trained, and they need to value security.

In the article's case, for example, they made it sound like the "hacker" basically conned someone into giving him access to the remote management interface. The only way you can fix a problem like that in software is to make the interface totally inaccessible.

-----


In a lot of shops, some old crusty box is used until it fails. It could be running XP or Win2k.

At least, that was how an Ivy League school's student-housing maintenance shop and a nuclear engineering service shop were run.

-----


They do not see themselves as targets. Their systems are likely bespoke, or at the very least obscure. And, more importantly, they have bigger problems to worry about, like operating their businesses.

As it becomes more clear that yes, someone will go to that trouble and it will have catastrophic consequences, you would hope that these things would get better.

More likely, someone will pay a "security consulting" company $100 million to run a Nessus scan and tell them to turn on Automatic Updates on their Windows infrastructure.

-----


You don't pay them the $100 million to run the scan. You pay them the big bucks to park themselves at the top of the lawsuit list when things go wrong. It's more like insurance at that level than technology.

-----


A better, more serious answer: always have two networks. Preferably physically separated (though I hear virtual networks are pretty OK with the right router equipment). The machines holler on one, and the administrative support on the other. I've seen what happens when people even just try to put both on the same layer, and it's inevitably some form of minor disaster. Not just because of security, mind, but because you really don't want your file transfer to a network drive to even slightly lag a sensor yelling back to the PLC about an interlock's state.

If you need to get on the process network, use a VPN, and only open to a machine that can't actually run equipment. A programming terminal may be made available to save costs so an integrator doesn't need to fly in for every support call, but these access points tend to require a VPN through at least two firewalls. (And even then, often you would still insist on them coming in person, for all manner of other reasons.)

-----


See also: https://www.tofinosecurity.com/blog/why-vlan-security-isnt-s... (and its comments, which echo different points)

-----


OK, granted they may want to monitor the plant remotely. Then they could have a plant-connected machine dump UDP monitoring packets to an Internet-connected machine, and have the plant-connected machine block all incoming packets from the Internet-connected machine.
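A rough sketch of that one-way arrangement on the plant-side box (the hostname, port, and log path are all invented):

    # Stream monitoring data out over UDP; nothing listens for replies.
    tail -F /var/log/plant/metrics.log | nc -u monitor.example.com 9999 &

    # Drop anything the Internet-connected machine sends back.
    iptables -A INPUT -s monitor.example.com -j DROP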

-----


It seems that there are many ways this could be done right (and does not seem a particularly hard challenge), it's just that people in charge probably were pretty much inept at that task.

You know how it is, no one cared about it until it happened. It just wasn't a priority.

-----


I get the impression from dealing with German companies that they tend to be very good at traditional "engineering", but when it comes to IT/computers they are 10 or 15 years behind.

I also think that in Germany it's considered that the good engineers and associated professionals want to go and work for firms like Audi.

-----


It's not just Germans. Anyone that isn't primarily in software has this phenomenon. Mobile phone makers, for example, are a disaster, and it's only having a whip wielded by a software company with some power over them that prevents it becoming a complete train wreck.

In my experience the most dangerous are engineers in other domains who learned just enough programming to get the job done but can't see the giant holes they've created and simply haven't run into yet.

-----


Testify (Brother or Sister). Having worked for a big telco, we regarded the mobile side as grade-inflated "amateurs".

I still recall one of my colleagues (working on the core IP network) being amused that one UK mobile provider was still using NT4 in their core network.

I'll be nice and not write what we thought about the US carriers.

-----


Well, use private circuits, or go old school and use modems with dial-back, i.e. you call the modem, it disconnects and calls you back on a hard-coded number.

-----


It is very unlikely that the process control network is connected to the Internet. However it is almost certainly connected to the corporate Intranet. Think about all of the metric data available on the process control network - that is needed by engineers for analysis, ERP systems for financials, asset management systems for maintenance etc. With an air gap, you can't do any of that in real-time.

-----


The solution is to run a historian in the DMZ: only the historian can read data from the DCS, and the corporate systems (ERP, BI, etc.) read data from the historian. And nothing from outside can update the DCS.
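In firewall terms that reduces to a few one-way rules. A sketch with invented addresses (10.0.1.0/24 for the DCS, 10.0.2.10 for the historian, 10.0.3.0/24 for corporate); note that a firewall only limits who can connect, so read-only access still has to be enforced by the historian's own protocol:

    # Only the historian may reach into the DCS segment (to collect data).
    iptables -A FORWARD -s 10.0.2.10 -d 10.0.1.0/24 -j ACCEPT

    # Corporate systems (ERP, BI, etc.) may query the historian only.
    iptables -A FORWARD -s 10.0.3.0/24 -d 10.0.2.10 -j ACCEPT

    # Everything else crossing segments, including corporate-to-DCS, is dropped.
    iptables -P FORWARD DROP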

-----


The article mentions industrial applications being the first to use this. What are some example industrial applications where you need a high-bandwidth, low-latency link that has to be wireless, and where Wi-Fi is not good enough?

-----


A lot of industrial processes generate immense interference - electroplating and arc welding spring to mind. I guess those would be a logical starting point.

-----


True, but arc welding will also create enormous interference in the visible spectrum.

-----


That's easily shielded against by use of an aperture, whereas RF interference is not nearly as easily shielded against.

-----


How about medical instrumentation/telemetry? In the ER, ICU and other bedside locations ability to communicate high bandwidth data is very valuable. Wireless transmission allows greater flexibility choosing devices to measure various parameters and freedom to set them up optimally around the patient.

Another idea is having a completely sealed Li-Fi transmitter in an isolation chamber send experimental data through a transparent port for analysis. (My guess is Wi-Fi would be harder to package for the purpose.) I'm sure there are many other situations where high-speed short-distance transmission would be advantageous...

-----


Have you heard of/seen Kiva robots? If the floor could just blink to the robot, that would be pretty cool! Especially if Li-Fi is cheaper / higher throughput.

https://www.youtube.com/watch?v=Fdd6sQ8Cbe0#t=13

-----


I don't know why people are talking as if deflation is a real thing when there's a worldwide property bubble going on and property prices are a very real aspect of the cost of day-to-day living. One must question the CPI metrics used.

-----


Follow the money. Who benefits from talking up deflation and who loses. This will answer your question.

-----


It's a fraudulent concern meant to enable more printing.

One of the many psychological toys the Fed & Co. use, just like they regularly threaten to raise interest rates (for years at this point) to buy more time on holding down the bubbles they've created without having to actually do anything.

The reason so many people are afraid of deflation is that they're from the Keynesian school of economics. They've been brainwashed for two generations to think inflation is how you grow an economy. No coincidence, the Keynesian experiment has been a global disaster of epic proportions, leading to the greatest accumulation of debt in world history; and locally, a 40-year stagnation in the American standard of living, perpetually high real unemployment, increased poverty, and increased inequality (because the rich can shield themselves from inflation, the poor cannot).

Every country in history that has ever attempted to implement a Keynesian inflation-based economy has failed, with the result being a disaster. Such examples include the US, Japan, much of Europe, and lately China has signed on to the debt/stimulus/inflation party. Japan is a famous, fake deflation example. They haven't suffered a penny of deflation in 30 years (an inflation-based asset bubble imploding is not deflation); if Japan had suffered decades of deflation, their wages and prices wouldn't be among the highest in the world.

-----


Yup. Total disasters. Higher standard of living than any humans in history, but Japan, Europe and the USA are somehow failed economies and total disaster zones.

What colour is the sky on your planet?

-----


I think the poster above is talking about the trends, not the absolute level of wealth. The standard of living in the US has been stagnant (even declining, for the lower-middle class) since 2000. Japan has been stagnant since the real-estate bubble burst in the 80s.

-----


It seems that some copyright claimants spam website operators with bogus takedown notices. What the DMCA takedown procedure lacks is a provision allowing website operators to just ignore takedown notices from claimants with a spotty track record of bogus claims. The standard for 'bogus' could be something as simple as a hit-rate threshold (the fraction of claims in the past x days that are not shown to be bogus).
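As a toy sketch of that hit-rate idea (the log format and the 50% cutoff are invented for illustration):

    # claims.log holds one resolved claim per line: "<claimant> <valid|bogus>".
    # List claimants whose valid-claim rate falls below the cutoff.
    awk '{ total[$1]++; if ($2 == "valid") good[$1]++ }
         END { for (c in total)
                   if (good[c] / total[c] < 0.5)
                       print c, good[c] "/" total[c], "valid claims" }' claims.log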

-----


A reputation system would be trivial to circumvent, though, through a shell entity. An easy way to limit bogus claims, I'd imagine, would be to make it so that a bogus claim has a direct negative consequence, in the small. Normal law has this pretty well figured out: you win, you get reimbursed; you lose, you pay up.

-----


The problem with this is that when EMI or some other record giant is the one being abusive and submitting bogus claims, YouTube/SC/et al. will not start ignoring notices, because they will be afraid that the label will pull their catalog.

-----
