The A-10 is extremely vulnerable to man-portable air defense systems (MANPADs), which are difficult to spot and nearly impossible to avoid at the low altitudes within which the A-10 best performs. MANPADs are increasingly common on the modern battlefield, and the supply exploded after the 2011 Libyan Civil War. Furthermore, the GAU-8, while an impressive weapon, doesn't hold its own against any tank built within the last twenty years. This leaves the A-10 as a slow, unstealthy delivery platform for AGM-65 Maverick missiles and JDAMs, a role better performed by higher-performance aircraft.
I say all that as a huge fan of the A-10C, with many hours spent in the DCS sim. People get bizarrely hyperbolic about the capabilities of this plane, but it's really a dinosaur.
Thanks for this. I don't see why people think 30 mm slugs are effective against tanks designed to resist 120 mm penetrators or shaped charges, including hits to the top armor. And the strategy of simply enduring battle damage is insane against modern MANPADS with large warheads. The A-10, along with the B-17, is one of the most over-romanticized aircraft of all time.
You're neglecting one of the most valuable roles for the A-10: close air support for ground troops in contact.
In that situation (read: the entirety of our conflicts in Afghanistan and Iraq), tanks had NO ROLE. You're fighting a highly mobile ground force that often has an armament or position advantage, and that's where A-10 CAS really makes a difference. Nobody fought TANKS in Afghanistan.
Ask anyone who's been in contact with the enemy how they feel when they hear that distinctive "BRRRRRRRAAAAAP!"
My father worked on A-10s in the Air Guard, so I thought I was very familiar with them... but it wasn't really until I saw that video that I realized how terrifying they are in the appropriate situation. While the above video is a friendly-fire close call, you're probably not going to find footage like that from the other side of the line.
Yeah, apparently the problem with high-speed jets is they can't hang around close enough to look out the window and figure out which people are the bad guys. Helicopters are too lightly armoured and get shot down, and current drones are a bit too lightly armed to hold off loads of enemy troops. Maybe in the future, when drones are more like http://youtu.be/zjympX1bxI4?t=18s, retiring the A-10 would make sense.
The problem isn't the drone airframe. It's taking the person OUT of the airframe.
Having that pilot in that seat removes the need to replicate the kind of situational awareness you have when you're flying your plane 100-1,000 feet off the ground and making passes to identify who is who so you avoid a friendly-fire incident. Until we have that kind of immersive, low-latency, real-time virtual reality, using drones for CAS is a pipe dream.
When the Air Force first wanted to retire the A-10, there was an article with a proposal: give the A-10s to the Marine Corps.
Even if the machine could not be retrofitted for carrier operations, the Corps could fly them from airfields ashore; it would really value an excellent ground-attack aircraft and would take extra-special care of them.
What about the depleted uranium bullets the A-10 uses? I read they are a threat to friendly ground forces when they cross the areas the A-10 fired on. Targets hit by the A-10 basically become a biohazard: http://en.wikipedia.org/wiki/Depleted_uranium
Yes, the A-10 is outdated and couldn't tear through the armor of a modern tank like it could the tanks of yore. But you also need to remember that a chain breaks at its weakest link. A strafing run from the GAU-8 will absolutely shred the tracks, gun, and exterior sensors of anything it comes near. Sure, the crew might live through the concussion and blunt-force trauma, but that vehicle is no longer in the fight.
Yep, the A-10C can take a real beating, but there's a huge gap between combat readiness and limping back to base for expensive repairs, assuming you survive the initial SAM strike. Much better to never be hit (or even seen!) in the first place.
Most asymmetric warfare in current theatres is not fought against tanks of the calibre of modern Western main battle tanks. As long as these types of conflicts remain likely, I don't see why you would prefer an F-35 for CAS over a combination of A-10s and (if armor really is a problem) drones or F-16s.
>I don't see why people think 30 mm slugs are effective against tanks designed to resist 120 mm penetrators or shaped charges, including top armor hits.
Because tanks only have that sort of armour on their frontal aspect and turret sides; the top armour is very thin. Even if the turret top is armoured well enough, it is likely the engine deck can be penetrated, producing a mobility kill.
Also, hardly anyone besides the US and extremely close allies has ultra-modern tanks like late series Abrams or the Challenger II (and I have seen no evidence that those tanks are immune to the GAU-8).
P.S. There are also cheap ways to reduce shaped-charge effectiveness that don't work against solid penetrators. Things like the TUSK upgrade are designed to defeat RPGs, not 30 mm cannon fire.
It's not just the calibre of the ammo, it's also the velocity. I'm certainly not an expert, but I did my military service on a base used for tank training. Back then, it was already a done deal: no armor could resist the amount of energy released by anti-tank ordnance, in particular kinetic penetrators. I've seen up close the old tanks that were used for training. On the entry side of the impact you have a perfectly cut circular hole a few centimetres in diameter, with slits all around like flower petals. On the exit side you have a big open tear. In between, there was a jet of molten metal. Bottom line: a spotted tank is a dead tank.
That was a fairly long time ago, but unless armor has made huge progress since then, producing something hard enough to deflect that kind of force and yet light enough that the tank can still move, I'd still bet on an A-10 against anything rolling on the ground.
Do you know anything about changes in tactics related to MANPAD threats, then? I read some about the Soviet war in Afghanistan, where we supplied MANPADs to the rebels for use against Soviet helicopters. Reportedly, they didn't actually shoot down very many helicopters, but they were very effective in that the threat of them forced the helicopters to engage from much further away from the battle and move at higher speeds to limit their vulnerability, but also limiting their effectiveness.
I wonder if there has been an effect like that on the A-10s. And I wonder if anybody who knows is actually allowed to tell us.
> The A-10 is extremely vulnerable to man-portable air defense systems (MANPADs), which are difficult to spot and nearly impossible to avoid at the low altitudes within which the A-10 best performs.
How about building a ground-support platform that can deal with this? Apparently, the Army has tried with helicopters that carry sensor pods above their rotors, so they can target enemies from behind cover. Perhaps the solution will be many small drones taking the place of one manned aircraft.
The Me 262 and Me 163 weren't available until the late-war period, long after the Battle of Britain. In 1940 Germany had the Bf 109, which was outclassed by Britain's Spitfire. The more capable Focke-Wulf Fw 190 entered service in 1941.
From personal experience (anecdote alert!), errors are also common in the ostensibly stone-cold-hard field of algorithms in computer science. A few years back I went on a string-algorithm kick and started dredging up old algorithm papers from the '80s on which to build Wikipedia articles.
Often, the papers would get the general idea right, but if implemented as described would not work at all or fail on edge cases. The best example I have is an algorithm to find the lexicographically-minimal string rotation. The simplest and fastest algorithm to do this is based on the KMP string search algo, and is tribal knowledge among ACM ICPC competitors. I thought it was pretty neat and wanted to cement this algorithm in popular knowledge, so I set about researching and writing the Wikipedia article.
I found the KMP-based algorithm in a 1980 paper by Kellogg S. Booth. The paper has very detailed pseudocode which does not work. At all. The tribal knowledge version I inherited had similarities in the general idea of the algorithm (use of the KMP preprocessing step) but everything else was different. I scoured the internet for a retraction or correction, but all I found was a paper written in 1995 which mentioned in passing errors in the 1980 paper.
I do wonder exactly how common this is. I emailed a professor who co-wrote one of the papers, and he replied that "it seems to me that all the algorithms (including our own) turned out to have errors in them!" Has anyone done studies into errors in computer science papers?
There is a point of view that says computer science conferences exist for the purpose of gaming the tenure system. The name of the game is plausible deniability: you're not supposed to submit papers that contain known false claims, but everything else is fair game. This has become such an integral part of the culture that technical correctness is no longer a necessary condition for accepting a paper. I think in this light it's quite clear why many scientists are happy to leave their papers hidden behind the ACM paywall.
Thank you, that was a fascinating read. It is understandable that technical errors are given a pass, as they aren't the meat of the paper. In the case of the Booth paper, I really should state I do not mean to attack him. The idea of using the KMP preprocess to solve the problem is a wonderful approach and works very well despite the actual implementation being technically incorrect. If I recall, the bug had to do with the termination condition; the algorithm had to run twice as long to terminate correctly. I will say my understanding of the algorithm improved as a result of debugging it!
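For the curious, the KMP-based approach as it usually circulates can be sketched in Python. To be clear, this is the commonly circulated, corrected form of the algorithm, not the pseudocode from the 1980 paper:

```python
def least_rotation(s):
    """Return the index of the lexicographically minimal rotation of s.

    Runs in O(n) using a KMP-style failure function computed over s+s.
    """
    ss = s + s                 # every rotation of s is a substring of s+s
    f = [-1] * len(ss)         # failure function, as in KMP preprocessing
    k = 0                      # start index of the smallest rotation so far
    for j in range(1, len(ss)):
        c = ss[j]
        i = f[j - k - 1]
        # Walk the failure chain while the next character mismatches.
        while i != -1 and c != ss[k + i + 1]:
            if c < ss[k + i + 1]:
                k = j - i - 1  # found a smaller rotation
            i = f[i]
        if c != ss[k + i + 1]:  # i == -1 here: no prefix matched
            if c < ss[k]:
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k
```

For example, `least_rotation("baa")` returns 1, since "aab" is the smallest of the three rotations.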
I think it's pretty common. Here's a note I wrote regarding Luc Devroye's highly-regarded (and excellent!) book on univariate random number generation:
Although not dissenting from the other reviews which tout the comprehensiveness of the treatment and its level of detail, I have to add an unpleasant fact about the algorithms: the codes may not work as written, and if they don't, there's not an easy way to track down the problem. (This is because of the nature of the constructions used in the complex constant-time algorithms -- this opaqueness is not a problem for the elementary algorithms which, alas, may not run in constant time.)
A look at the author's web site (currently at errors.pdf off his main page) shows that, e.g., the algorithm on page 511 [of the book] for Poisson r.v.'s has four serious bugs as originally published. This means that the main algorithm for one of the most important discrete distributions was not coded and tested by the author before the book appeared!
In fact, I believe this algorithm has at least one more bug, because I'm still seeing a small off-by-one anomaly in my implementation. The algorithm for binomial r.v.'s may have trouble as well -- I see problems for N=400, p=0.05. After 10 million draws (i.e., enough to get good statistics) I see deviations of counts in some bins near the peak (i.e. number of integer outcomes of the R.V.) of 8 standard deviations from the expected number of counts. So, be careful, and consider alternate implementations of the more complex algorithms.
There's a lot of detail in the book, and the techniques are valid. But as we all know, implementation sometimes reveals algebra errors!
Implementations of common algorithms are even worse.
The Java binary search implementation had a bug that eluded detection for nine years, and it was based on an implementation from the 1986 book "Programming Pearls" that contained the same bug (TL;DR: it's an overflow error that computers in 1986 would probably never have run into; who could imagine having an array with more than 2^30 elements?!).
Even worse: "While the first binary search was published in 1946, the first binary search that works correctly for all values of n did not appear until 1962." And this bug suggests it is likely that even the 1962 version would have failed the same way.
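The bug itself is tiny. In Java, `int mid = (low + high) / 2;` overflows when `low + high` exceeds 2^31 - 1, producing a negative index; the published fix was `low + (high - low) / 2`. A Python sketch that simulates Java's 32-bit wraparound (the index values are just illustrative):

```python
def java_int32(x):
    """Reinterpret an integer as a Java 32-bit signed int (wraparound)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def buggy_mid(low, high):
    """mid = (low + high) / 2 as Java would compute it."""
    s = java_int32(low + high)               # the sum silently overflows
    return s // 2 if s >= 0 else -(-s // 2)  # Java division truncates toward zero

def safe_mid(low, high):
    """The fix: the intermediate difference cannot overflow."""
    return low + (high - low) // 2

# Indices this large imply an array of over 2^30 elements:
print(buggy_mid(1_500_000_000, 1_600_000_000))  # negative -> ArrayIndexOutOfBoundsException
print(safe_mid(1_500_000_000, 1_600_000_000))   # 1550000000
```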
This is a problem in computational electrophysiology as well. There are a lot of typos and other simple mistakes in classic papers. Say you want to implement a finite differences model for a certain type of voltage-gated potassium channel. If you go back to the original paper it's not uncommon to find minus-signs omitted, parentheses placed improperly or other unfortunate bugs. It can take a lot of head scratching and wasted time to get to the point that you can reproduce the figures from the paper!
Granted, when something has been in the literature for a long time, the derivative papers and popular implementations (in e.g. NEURON) are usually right, but there is rarely anything in the scholarly record that documents these errors. It's all tribal knowledge and side channels.
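To make the fragility concrete, here is the classic Hodgkin-Huxley potassium gating variable (shown with the standard published rate constants; any particular channel paper will differ). The signs inside the exponentials are exactly the sort of thing a typo can flip, and the flipped version still runs, it just produces garbage:

```python
import math

def alpha_n(v):
    """K+ activation rate (1/ms); note the minus sign inside exp()."""
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    """K+ deactivation rate (1/ms)."""
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_n(v, n0=0.0, dt=0.01, steps=5000):
    """Forward-Euler integration of dn/dt = alpha*(1-n) - beta*n
    at a clamped membrane voltage v (mV), for steps*dt milliseconds."""
    n = n0
    for _ in range(steps):
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
    return n

# At rest (-65 mV), n should settle near its steady state alpha/(alpha+beta):
n_inf = alpha_n(-65.0) / (alpha_n(-65.0) + beta_n(-65.0))
print(round(simulate_n(-65.0), 3), round(n_inf, 3))  # both ~0.318
```

The only practical check is exactly what the comment above describes: reproduce the paper's figures (here, the steady-state curve) and scratch your head until they match.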
Ugh, that sounds terrible. During a previous internship at an HPC company I implemented a computational electrodynamics FDTD algorithm as given in the Taflove book, and I made more than enough errors even without the book containing mistakes! Two fields, each with three components, and subtly different update equations for all of them. What a nightmare, especially since it's impossible to tell what is wrong when watching the EM wave propagate in an impossibly oblong fashion during your simulation.
When debating social justice issues, it is wise to bear in mind that deconstructing one particular thesis in a vacuum offers little more than intellectual showboating; much of the social justice narrative describes small and seemingly trivial barriers that cumulatively form concrete obstacles. For a better metaphor, Marilyn Frye writes in The Politics of Reality:
"Cages. Consider a birdcage. If you look very closely at just one wire in the cage, you cannot see the other wires. If your conception of what is before you is determined by this myopic focus, you could look at that one wire, up and down the length of it, and be unable to see why a bird would not just fly around the wire any time it wanted to go somewhere. Furthermore, even if, one day at a time, you myopically inspected each wire, you still could not see why a bird would have trouble going past the wires to get anywhere. There is no physical property of any one wire, nothing that the closest scrutiny could discover, that will reveal how a bird could be inhibited or harmed by it except in the most accidental way. It is only when you step back, stop looking at the wires one by one, microscopically, and take a macroscopic view of the whole cage, that you can see why the bird does not go anywhere; and then you will see it in a moment. It will require no great subtlety of mental powers. It is perfectly obvious that the bird is surrounded by a network of systematically related barriers, no one of which would be the least hindrance to its flight, but which, by their relations to each other, are as confining as the solid walls of a dungeon."
This myopic focus seems to be more common on technology forums than elsewhere, and I'm curious why. Perhaps a technical education lends itself well to analyzing the validity of individual details but not to reasoning at a structural level.
i love the analogy, but you're overthinking it. most people on technical forums have never been poor, so they've never been in the cage and have only heard about it second-hand.
i didn't understand what poverty was like until i started smoking weed all the time, neglected most aspects of my life and got heavily into debt. i still wasn't poor, but i started to understand why people without money, who are constantly hounded by debt collectors, tend to focus so much on the short term.
Just to lay my cards on the table: I'm a registered member of the US Green Party and have been involved with various left-wing activist movements since I was fifteen. I've also been poor, living in a junkie hotel and doing day labor for cash. My most upvoted comment on HN (before I ragequit my previous account, basically over leftist grievances!) was one describing what that was like.
Still, I find the intellectual attitude that you've described deeply unsettling. As you've sketched it, the "social justice narrative" is unfalsifiable: what claims does your theory of politics actually make about the world if any given piece of it can be overturned without making a dent in the theory itself? A theory of social inequality can't be correct in general without being correct in particular cases. Refuting (not "deconstructing") a particular thesis is not "intellectual showboating", it's engaging in argument. What else is someone who sincerely disagrees with you supposed to do?
I've seen this happen with depressing regularity: a self-styled social justice advocate will make a claim, and sometimes that claim will get demolished by an intelligent opponent. (Yes, this can happen, and it is a portent for the future of the left that most activists never learn to take a drubbing from a perceptive conservative.) Rather than taking stock at that point, the social justice advocate throws up a polysyllabic ink cloud ("institutional", "systemic" and "societal" seem to enjoy heavy rotation) and jets away. Whether it is true or not that nebulous social forces conspire to constrain outcomes for the poor in the way that the social justice advocate believes, I have no idea why such verbal behavior should be considered convincing.
This is especially frustrating if you actually sincerely think that the left has good ideas about social policy that should be argued for in earnest and implemented.
The heavy rhetorical weight placed on the word "narrative" also gives me goosebumps. What happened to plain old arguments: a set of propositions intended to establish (or at least raise the probability of) the truth of a conclusion? I would hate to think it's because "narratives", unlike arguments, are impossible to demonstrate or refute.
The key word here is vacuum. It's pretty easy to come up with compelling arguments against individual examples of inequality, but these only hold up in isolation and, yes, fall apart at the structural level.
I agree that retreating to argue on the structural level is rhetorically weak. That's all that can really be said against it, and it's unfortunate, because a story about how you are really the oppressed one (as seen in white supremacist and MRA groups) is much more alluring than some intangible narrative about structural inequality.
I say narrative because that's the form it has taken, and indeed had to take. Mary Wollstonecraft couldn't spend her time arguing point by point in a council chamber. She wrote books, compelling novels which had to fight viciously for every inch of ground on which they stood.
You've seen how easily people convince themselves that gripes of oppression are baseless, today, 200 years after A Vindication of the Rights of Woman. We've had academics write and study and provide rhetorical tools for generations now, building a narrative, because that is what is needed. To truly argue against it, it must be met on that same narrative ground.
It's the old statistics-versus-story problem. People don't relate to stories about the masses, but they can relate and react to the story of an individual. So governments and organizations need to focus on the broader statistics to know where to direct their efforts, but if you want to reach individual people you have to relate to them on a personal level.
I was a Microsoft intern last summer. They gave all the interns a Surface Pro at the end of the internship. As a student, I've used my Surface to dodge any use of paper this semester. OneNote + Wacom stylus is a great combo.
Galaxy Note 8 does not run Matlab, Mathematica, AutoCad, Visual Studio, LaTeX, Linux, Virtual Machines, or any of the stuff that I used throughout my college life.
Being a student is _more_ than just taking notes in class. First and foremost, I need to be able to do my homework... and for an engineer, that means being able to use high-powered mathematical programs.
Well, that is a relatively specialized requirement. For most students, their computer is for email, the web, photos, Facebook, word processing, textbooks and some light spreadsheets. More specialized students can spend $800 more to get something like the Surface Pro 2.
* Graphic Designers require Photoshop... maybe Maya, 3D Studio Max or Blender as well.
* Other art majors will benefit from pretty much anything in the Adobe suite: not just Photoshop, but also Premiere and After Effects (for video editing).
* I'd bet that a Communications major would use the same tools as art majors / graphic designers use.
* Architecture majors require CAD of some kind. Ditto for Landscape Architecture majors, Civil Engineers and Mechanical Engineers. Building structures and seeing how they look in virtual environments turns out to be useful, ya know?
* Everyone in the Biology field uses some form of statistical software. (The one that was big at my school was SAS, but IIRC there are lots of competitors.)
* Actually, to hammer the above point even more, any subject that ever touches upon advanced statistics more or less requires training in SAS. So not only Math/Statistics majors, but Business, Sociology, Psychology, Economics, Health Policy (with potential applications into nursing...)... a lot of people use SAS.
* Any "lab" science practically requires LabVIEW. Be it Physics, Chemistry, or Materials.
Basically, every major except English in the top-10 list requires advanced programs that run on laptops. For many of these purposes, I guess a $2000 MacBook Pro would be usable... but the $899 Surface Pro 2 just seems like a better buy in comparison.
IIRC, even English majors touch upon specialized software in the form of Library Management Systems, but I'm not very familiar with those.
I was, too. I take it to class every day. I used to not take notes at all, but now I do because it's so easy, and I have all my lecture notes for the whole semester right there. A tablet at a Surface RT price point with a Wacom digitizer would storm the student market.
I think you misunderstood his intentions. You see, fallacies are like Harry Potter spells except their Latin names produce crushing arguments instead of magical effects. You are now engaged in a duel and must respond with your own invocations. This is how debates are conducted.