The tech and products were complex. The turnover rate was high and training new hires was a lengthy process. The new projects coming down the pipeline never ceased (this was during a period where FH/F9-1.1/Dragon/Crew was all under design/development and constant iteration).
It was fun for a young engineer, but burnout is real.
*WarpDrive was actually pretty impressive given the amount of stuff that it did.
"We're paying more for programmers than almost anyone else in the industry!" is an obvious thing a bean counter would notice and point out. Productivity is harder to measure, and all that time lost to training on the job doesn't immediately leap out in spreadsheets because it's blended with actual work.
I can't speak to development specifically, but I looked into one of Tesla's devops job postings and I make slightly more as a mid-level engineer than they offer for a senior position. I also 'only' work 40-hour weeks and my cost of living is about 20-30% lower than there.
But no employer tracks that, not even in academia...
I can't find the reference I'm looking for, but the cost associated with turnover due to narcissists is apparently as high as the cost of caring for ASD. I would imagine the turnover cost associated with burnout caused not by narcissism, but simply by poor & short-term management & vision, to be at least on par.
Well they landed a rocket on a barge in the ocean so something about that model must be working right
To put it another way: If your method of writing novels is to hire an infinite number of monkeys and put them to work on typewriters, you can't say "Something about this model must be working right, I came out of it with the complete works of Shakespeare!"
They landed a rocket on a barge in the ocean. Maybe with a better process, they could have done that two years faster, for 1/100th the cost, with no burnout. You don't know, and you can't say the model works right just because there's something to show for it.
All you know is that the process is able to eventually land a rocket on a barge. It doesn't tell you whether it's good at it.
So while it may not be the most efficient process, the overall process is much better than their competitors since they can launch for so much less money.
I don't think you can answer that question by just looking at Tesla.
The whole company seems to be operating in "burning the candle at both ends" mode, not just the workers at the bottom. Also, it's not just "saving money" but pushing super hard to accomplish something extraordinary, i.e. generating new revenue, not just reducing costs. Additionally, the workers are partially compensated via stock options, so they share in the success of the company even if not through higher wages alone. So I'm not sure "mistreatment" is the right word to use.
At the end of the day, SpaceX (and Tesla) are not for everyone forever. I am not in a station in life to want to join right now, but may in the future. And maybe this strenuous effort is not especially profitable for SpaceX because of the churn that it creates. But that churn IS helpful for the industry (and thus, in my opinion, society) at large because it has spread SpaceX's know-how throughout the US aerospace community and resulted in alumni founding probably dozens of companies that can leverage the lessons learned from SpaceX. But some people work well in that environment and stay long term (which isn't to say it can't be improved).
So I am glad SpaceX is the way it is, and I hope they're successful in the future. But it also doesn't have to be the model for everyone else to copy. It might not work for everyone else, nor should it be expected to.
I wonder if the constant burnout and churn keeps employees from vesting and thus ever collecting much if anything in stock?
I'm refuting the statement that "saving money by paying peanuts, grinding people down to burnout, and then constantly having to rehire/retrain new people as the old ones leave" is unanswerable in the current context based on one company, especially because this company seems to be destroying their competition.
If they have a process proven to work, in a world where they are already doing things no one else has been able to do, changes to that process should be introduced very slowly.
Get a lot of monkeys and put them to work. They will produce something. Better to have something than nothing.
While touting the purchase cost or energy benefits, these ideas routinely ignore the overhead cost inherent in a distributed system, let alone the Fallacies of Distributed Computing, which is the GP's and OC's (and possibly your) point, I believe.
If other high end software jobs are paying the same for 45-50 that Tesla is paying for 60, that’s on the low side hourly, but the low side of the high end.
75% (45/60) of 150k is still $112.5k plus presumably good benefits and some sort of equity component. That’s damn fine compensation for someone fresh out of undergrad even in 2018.
I wouldn’t want to work 60 indefinitely even for great pay, but that’s a separate issue.
Maybe I am just lucky but I have worked for 2 of the major big tech companies and came out of college making 100k+ while primarily working 40 hour weeks at both.
> 75% (45/60) of 150k
Tesla new grad software engineer total comp is 150k? Damn, in that case they are pretty close to big tech (Amazon is 145k and G/FB is 165k from what I have heard). I assumed it was lower since my brother was a PM with 6 YOE and got paid 130k a year.
> That’s damn fine compensation for someone fresh out of undergrad even in 2018.
Oh totally, my girlfriend is probably going to make like 60k out of grad school. However, while it is much better than anyone besides what my finance friends are making, that does not mean they are paying well relative to the tech industry.
In my opinion, this is the biggest flaw of the system. For many investors, a company is less about what it makes and more like a process to grow their money. Even when a company becomes profitable, there's always pressure to make it even more profitable quarter after quarter.
Would you keep your retirement funds in a company that doesn't grow?
Not everything needs to grow into the sky.
It would be worse than buying a bond: you'll get the risk of equity with the returns of a bond.
(Then again English is not my first language.)
While you might need to pay more and improve working conditions, would you be more efficient overall, given that individual staff would spend more time being productive?
Agree about WarpDrive being pretty amazing for all the stuff it did. Although amazing things tend to just clump up from all the features that you need, and you end up with an app that is hard to manage.
It wasn't all bad. There were other groups at X that provided pretty amazing tools for people to get things done (thanks, @cbanek and friends!)
Soon after he was ousted and Thiel was made CEO. Interesting to see he's still pushing Windows.
Source: Paypal Wars by Eric Jackson
That's still true today and it will probably always be true unless Microsoft ports Visual Studio one day. It's not a super great reason to choose it as an operating platform though. :\
My jaw hit the floor the first time I heard this. Why Linux instead of an RTOS?? Apparently Tesla's autopilot also runs Linux, which seems like a huge accident waiting to happen (pun intended).
These things frighten me every day. What frightens me even more is the people who work on autonomy without a real grasp on determinism. It's unfortunate that the people who have the most high-tech backgrounds applicable to autonomy (PhDs in computer vision, AI, etc.) have never implemented safety-critical autonomous systems outside of a research project that tested some aspect of detection or control in a test environment and had to work only once to get a paper published.
For general robotics linux is great. But there is an enormous difference between a robot roaming around your house bumping off walls, and a vehicle carrying a whole family at 70mph.
Most of the linux based systems I have worked with have some form of redundancy, whether it be other chips running linux, or ideally ECUs running an RTOS that perform monitoring, gating, and/or some level of safety fallback control. The RTOS-based redundancies are often what provide ASIL-D. Trusting a single linux processor is what everyone does to get funding, but when you go out and test on public roads with human lives at stake, or start selling a product, you better have some quantitative guarantees other than "It's been fine so far..." That kind of stuff makes me angry.
But on the other hand, has any Tesla car ever had an accident because of this? At some point, "heavily tested and validated end to end in real-life conditions for years" and "formally proven on a simplified model using reasonable assumptions made by human engineers" become relatively close in terms of how much trust you can put in a system.
But somehow we tend to prefer the latter. I am not sure if this paradigm is still relevant these days.
That seems like a more wordy way to state "It's been fine so far".
> "formally proven on a simplified model using reasonable assumptions made by human engineers"
I didn't know about that. What kind of formal proofs did they do? Did they involve the linux scheduler?
I guess you'd need three separate implementations to achive some redundancy when you can't just slam the emergency brakes if the systems disagree.
(... For some reason, I find it highly demotivating that another team is doing the exact same thing. Maybe I just want to be a snowflake...)
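A two-out-of-three voter is the usual way to reconcile redundant implementations when you can't just stop on disagreement; here's a minimal sketch in C (the command type and tolerance are hypothetical, purely for illustration):

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical steering command produced by three independent implementations. */
typedef struct { double steer_rad; bool valid; } cmd_t;

/* 2-out-of-3 voter: output the average of two channels that agree within a
 * tolerance; flag a fault if no two channels agree. */
static bool vote_2oo3(cmd_t a, cmd_t b, cmd_t c, double tol, double *out)
{
    cmd_t ch[3] = { a, b, c };
    for (int i = 0; i < 3; i++) {
        for (int j = i + 1; j < 3; j++) {
            if (ch[i].valid && ch[j].valid &&
                fabs(ch[i].steer_rad - ch[j].steer_rad) <= tol) {
                *out = 0.5 * (ch[i].steer_rad + ch[j].steer_rad);
                return true;   /* at least two channels agree */
            }
        }
    }
    return false;              /* disagreement: escalate to a fallback/safe state */
}
```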
I've seen the same pattern a few other times. Slow, hand-built, rad-hard systems CAN be more stable and demonstrably safer... but that is rarely the case, and the effort required to get such a system right is orders of magnitude greater than using standard "non-deterministic" systems. That engineering effort can be better spent innovating and building fundamentally more advanced solutions.
Just my experience. Just my opinion
I haven't messed with an RTOS before but have done some fooling around with scheduling on microcontrollers and I can see why linux is tempting for ease and speed. But we're talking rockets and self-driving cars. These things are expensive as hell, can easily kill people, or both. It seems like the exact sort of place you'd want to take the time and effort to be sure.
Why would anyone need "sub millisecond control of Traffic Lights"?
Traffic lights are mission critical systems, of course, but even millisecond precision should be more than enough, and possibly even 0.5-1 second precision...
If the connection is lost, the exchange can just fallback to "naive" mode.
When I was a kid there was one light that, when you drove over the pressure sensor, it wouldn't really do much. But if you backed up and drove over it again it must have registered an additional car coming through and the light would almost immediately go through its light cycle to change. It was really interesting to see!
Nowadays I think it's mostly cameras? We have a light near my home and the left signal will literally never trigger unless someone is in one of the left lanes.
Look up inductive coupling. The basic idea is that a changing (AC) current in a conductor generates a changing magnetic field (Ampere's law). This changing magnetic field then induces a voltage in the second conductor (Faraday's law). This is the principle behind how transformers work.
The Wikipedia article https://en.wikipedia.org/wiki/Induction_loop#Vehicle_detecti... has fairly extensive details on modern implementation.
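For a rough sketch of the two laws mentioned above (textbook forms, not specific to any one detector design):

```latex
% Ampere's law (quasi-static): current in the buried loop produces a magnetic field
\oint \mathbf{B}\cdot d\boldsymbol{\ell} = \mu_0 I_{\mathrm{enc}}
% Faraday's law: changing flux through a nearby conductor induces an EMF
\mathcal{E} = -\frac{d\Phi_B}{dt}
```

In practice the detector typically senses a car as a change in the loop's inductance, which shifts the frequency of the oscillator driving the loop.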
 These do exist for weight-in-motion systems, however.
Out of curiosity, why do traffic light controls need to be that precise?
"Ironically, the biggest concern with red-light camera systems is that they are so precise. They measure a driver’s speed and exact location within a fraction of a second — but do not leave any wiggle room for the errors of traffic signals such as inconsistent yellow light times"
If there's not an accuracy threshold for safety reasons, there's gotta be one when traffic ticketing revenue is on the line (also I guess determining fault at accidents, vehicular manslaughter cases, etc.)
still, revenue driven development actually makes sense as a thing too ...
Why aren't there efforts to create "autonomous only" traffic management scenarios, where people drive into a given, known area, and the area then takes control of managing the traffic and vehicles? Such that you relinquish control of the vehicle to that area's control system, with your destination stated, and then your vehicle is managed accordingly.
For example, a parking lot for a really large venue with an autonomous valet system.
You drive up and get out, then the system takes over your car, drives off with it, parks it, and you recall it when needed...
Or managing traffic in a very heavily trafficked bottleneck of a grid, such as the Bay Bridge merging egress from the SF financial district.
If you put in your destination and join the group, all the cars could then be managed for getting onto the bridge more rapidly...
Autonomous doesn't need to drive me from SF to LA, but it would be great if an autonomous hive mind could get all the cars to increase throughput in given situations, no?
I seem to recall that similar ideas go back to the early 1990s, at least, for highways: Drive your car to the entrance ramp, plug in your destination, and the autonomous system takes it from there.
But for many of these things, such as the Bay Bridge or a highway, it seems like there is a simpler solution: Put the cars on a train and take them across by rail. I suspect I'm not the first person to think of it so I wonder why it's never been done (i.e., what problem I'm overlooking).
* Full disclosure: I ride bicycles and only plan to ever live in localities where I won't need to purchase an entire car.
More cars, or even car-friendly tech will never be the solution for the issue of too many cars. See also "induced demand": https://en.wikipedia.org/wiki/Induced_demand
Not just for cars, but also for cargo... just have a constant gondola-like conveyor that detaches a platform from the line to slow it enough to allow for cargo to get on, then re-zips it back into the line and speeds it along, and de-rails it once it hits its exit/location...
Ideally though, in cities, there would be no surface streets and all cars would have their own level below that of bikes and pedestrians.
What would SF look like if a superstructure was built above all streets and all pedestrian and bike traffic was moved up there? (sure, SF may be a poor example, so just select [city])
Look at Singapore's vast underground connecting malls between facilities. Those are pretty amazing.
I grade US urban/city planners rather poorly.
The street level would be dark and storefronts would become difficult to access. If the stores moved up to the 2nd floor (a massive transformation of real estate, probably greatly reducing available living space), what would go on the first level? Not many people would want to live in the dark.
The results are mostly "garden apartments" which are damp, dim, and slightly less expensive.
Besides, the best integrated transport solution in the world already exists in places like Utrecht, Groningen and Assen thanks to reforms that started decades ago.
On tracks they own.
With no other traffic than theirs.
And unlimited funding.
And much less apologetic about wielding power.
Lots of mines start from little companies searching for a possible ore body (the idea or market fit), then raising money to perform a closer survey (seed funding). If the closer geological work is promising they often obtain a lease (patents or other IP).
At this point it goes one of two ways. Either they raise enough money to start and operate the mine themselves (series A, B etc, leading to an IPO) or they sell the prospect to a major company.
Then the newly-minted millionaires, who know a lot about mining, invest in the next crop of junior miners.
So as with tech there are conceptual, exploratory, growth and liquidity phases, followed by a process of reinvestment.
I remember realising this when living in Perth and being frustrated that, with quite literally billions of dollars sloshing around the city looking to invest, you'd be hard-pressed to pitch anything smarter than a brochureware website to the local investment class.
There were other structural problems. Stock options are not A Thing for various legal reasons. Failure in starting a high-risk business is a bit of a black mark. There are VCs but so much of their money came from governments trying to jump-start a market that they were about as risk-taking as a loans officer at a bank (what government wants "10 MILLION WASTED ON PHONE APPS" as a headline?).
Meanwhile the super funds are collectively sitting on trillions of dollars and investing an absurdly dumb fraction of it in the ASX. Putting just 0.5% of their holdings into VC would unlock tens of billions of dollars of potential investments.
For which, hey, VCs who lurk here and want to raise a fund: go talk to the Australian superannuation industry. It is a massive pool of underperforming cash languishing in the same dozen public companies and, because Australian law forces all Australians to set aside at least 9.5% of income for retirement, the industry will never stop having incoming funds. There will always be new money to raise and it will probably be the 2nd largest pool of pension investments sometime in the next 10-15 years.
I will accept finder's fees and/or massively remunerative job offers as reward for this insight.
I mean, heck. If it were that easy to start a unicorn...
But seriously - I am not saying its easy, I'm just surprised that we haven't made much (publicly known about/announced) efforts along these lines.
I mean, we have TCP down pretty good - if we are simply thinking of cars as packets, a lot of the math should exist to ensure collision-less delivery?
Neither of these models is really analogous to cars on the road.
But applying collision detection and exponential back off in road traffic is a "fun" thought experiment.
A more apt model would be critical sections and semaphores from concurrent programming. Which is named after a collision avoidance scheme used to control trains. And we all know how difficult concurrent programming can be. I don't want traffic with deadlocks, starvation, busy waiting or live locks.
You better hope you don't get any dropped packets.
I don't think there is any possibility of large scale autonomous driving without a shared control infrastructure. Autonomous driving will only work as long as autonomous cars are a small minority.
As soon as they stop being in the minority, some shared control infrastructure is necessary.
Case in point: 4 cars arriving in a no-lights 4 way intersection simultaneously will cause a deadlock. A tie breaking scheme requiring some form of communication is necessary.
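As a sketch of what such a tie-breaking scheme could look like (all names hypothetical): each car broadcasts its arrival time plus a unique ID, and whoever sorts first gets the intersection, so the fully symmetric case can't deadlock.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical claim broadcast by each car approaching the intersection. */
typedef struct {
    uint64_t arrival_ns;   /* timestamp of arrival at the stop line */
    uint32_t vehicle_id;   /* globally unique ID, breaks exact ties */
} claim_t;

/* Total order over claims: earlier arrival wins, the ID breaks ties. */
static bool claim_less(claim_t a, claim_t b)
{
    if (a.arrival_ns != b.arrival_ns) return a.arrival_ns < b.arrival_ns;
    return a.vehicle_id < b.vehicle_id;
}

/* A car may enter only if its claim precedes every other active claim. */
static bool may_enter(claim_t mine, const claim_t *others, int n)
{
    for (int i = 0; i < n; i++)
        if (!claim_less(mine, others[i]))
            return false;
    return true;
}
```

This is basically a distributed-mutex / timestamp-ordering flavour of tie-breaking; the hard part in practice is the communication and trust model, not the ordering rule.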
Even if manufacturers would somehow manage to agree on an API, it could then be "abused" by competitors or accessory vendors to sell their own customized car assistants, which would instantly work with any car brand - without them having to negotiate with the manufacturers.
I fear we will sooner have a usable open IoT standard than manufacturers giving up that level of control.
0. which infamously occurred on the Mars Pathfinder. https://www.rapitasystems.com/blog/what-really-happened-to-t...
Any decent RTOS should have priority inheritance, which should avoid this.
Pointing to this one thing as an RTOS issue isn't really an accurate portrayal of current RTOS capabilities.
Priority inversion had been known about since the 70s. Priority inheritance seems to have first been proposed in 1990:
https://www3.nd.edu/~dwang5/courses/spring18/papers/real-tim... (Priority Inheritance Protocols: An Approach to Real-Time Synchronization)
The Pathfinder engineers were apparently unaware of the priority inheritance option available in VxWorks until they had to debug the issue live from a few hundred million km away.
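For what it's worth, on Linux and other POSIX systems you can opt into priority inheritance per mutex; a minimal sketch (error handling abbreviated):

```c
#include <pthread.h>

/* Create a mutex with priority inheritance, so a low-priority thread holding
 * it temporarily runs at the priority of the highest-priority waiter instead
 * of being preempted by medium-priority work (the Pathfinder failure mode). */
static int init_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc) return rc;
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```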
Also, in my experience it might be easier to reach the ASIL-D requirements using a smarter combination: a limiter on an RTOS, with more of the generic code on something like linux. This would probably also end up relying on more widely used and tested applications, gaining more stability. (That is partly outside ASIL-D.)
Functional safety and ISO-26262 is much misunderstood in automotive development and architecture.
Also, IMHO the certifications without the safety case are kind of useless. You still have to assess how you will find the problems with it in your use case, which might ever so slightly differ from what they certified. The automotive industry, though, loves to have someone else to blame, e.g. the supplier of the RTOS, compiler, etc. Using Linux makes the blame game hard.
Here's what's inside of every autonomous vehicle ever made: a message-passing subsystem, sensors, fusers, navigation, dynamic control, actuator device drivers, and thruster device drivers.
Sensors measure things and emit readings. Your most expensive, highest frequency general purpose sensors emit new readings at something too fast for a human but hella slow for a computer, like 100Hz-5KHz. Your common sensors, a video camera for instance, don't get even close to that. These sensors are often connected, even today because milspec companies hate modernity, via RS-232 serial cables. For those younger than 30, RS-232 is what non-Apple computers used for non-keyboard/mouse peripherals prior to the introduction of the first iMac in 1998 because USB didn't really take off until then.
Sensors send their readings via the message-passing subsystem to fusers.
Fusers take the readings from the sensors and, hur hur, "fuse" them together into a description of where the vehicle is and what the environment is like. This usually involves something like a Kalman filter. Fusing even your very fastest sensors, the 5KHz IMUs of the world, is just a small bit of math and basically takes no time at all.
Fusers send their fused states via the message-passing subsystem to navigation.
Navigation takes the fused sense of self and the world and decides which direction to head and how fast to go. The objective could be something like hitting route waypoints or it could be something like staying in a lane and not being rear-ended and avoiding obstacles. Car navigation probably doesn't act on new input more frequently than 100Hz, you certainly can't act on new input more frequently than 100Hz, and it takes basically no time at all.
Navigation sends its directives via the message-passing subsystem to dynamic control.
Dynamic control takes navigation's "which way" and "how fast" directives and turns them into more realistic short-term goals accounting for hysteresis and other physical limitations of the system like minimum turn radius. This is just a small bit of math and basically takes no time at all.
Dynamic control sends its directives via the message-passing subsystem to the actuator and thruster drivers.
Actuator drivers convert dynamic control's "go more left" message into trying to go more left.
Thruster drivers convert dynamic control's "go more fast" message into trying to go more fast.
Actuator and thruster drivers send readings (hopefully) from the actuators and thrusters, because those are also sensors, back to dynamic control and fusion.
Sensors feed into fusers, fusers feed into nav, nav feeds into dynamic control, dynamic control feeds into actuation and thrust. When you have new data, you do something new with it which is technically doing the same old thing with it and just producing new output.
Now there aren't that many sensors. There are way fewer fusers. There's only one navigation. There's probably only one dynamic control, though there could be a couple.
Anything else that I haven't already described, like Waymo's machine learning object classifying 4D mustache adding hotdog detectors, are just sensors and fusers sitting on their own computers feeding new lat/lng/heading/speed to navigation at a rate that is hella slow for a computer. And for sure Waymo's convolutional neural network middle-out jaywalking yoga mom detector takes a lot of processing, but it's running on its own computer, not competing for resources, and emitting its fused readings at some hella slow for a computer rate.
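To make the shape of that pipeline concrete, here is a heavily simplified structural sketch in C. Everything in it (types, fields, rates, function names) is made up for illustration, not taken from SpaceX, Waymo, or anyone else's actual code; the stage functions are left as prototypes.

```c
#include <stdint.h>

/* Simplified message types flowing over the message-passing subsystem. */
typedef struct { uint64_t t_ns; double gyro[3], accel[3]; } imu_reading_t;
typedef struct { uint64_t t_ns; double lat, lon, heading, speed; } fused_state_t;
typedef struct { double desired_heading, desired_speed; } nav_cmd_t;
typedef struct { double steer, throttle; } ctrl_cmd_t;

/* The stages: each consumes the previous stage's output and publishes its own. */
fused_state_t fuse(const imu_reading_t *r);   /* e.g. a Kalman filter update      */
nav_cmd_t navigate(const fused_state_t *s);   /* waypoints / lane keeping         */
ctrl_cmd_t control(const nav_cmd_t *c,        /* respects turn radius, hysteresis */
                   const fused_state_t *s);
void publish_actuator(double steer);          /* "go more left"                   */
void publish_thruster(double throttle);       /* "go more fast"                   */

/* One pass through the pipeline, run whenever a new reading arrives
 * (100 Hz to 5 kHz for the fastest sensors). */
void pipeline_step(const imu_reading_t *reading)
{
    fused_state_t state = fuse(reading);
    nav_cmd_t nav = navigate(&state);
    ctrl_cmd_t cmd = control(&nav, &state);
    publish_actuator(cmd.steer);
    publish_thruster(cmd.throttle);
}
```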
This stuff really does get complex. A sensor controller will likely be on multiple cycles internally: one for oversampling the sensor hardware and one for transmitting the (filtered/corrected/calibrated) results. A "fuser" as you call it (never heard that term before) needs to make sure that it never acts on stale sensor information (sensor malfunction, accumulated communication issues). Transmission errors need to be detected. Random bitflips in values that are stored in volatile memory for long time spans need to be checked and acted upon.
Every independent controller in such a system requires some kind of watchdog that needs to be reset periodically. Too many watchdog resets in a row indicate a failure and the affected system must shut down in a defined way. You need ways to deal with any combinations of controllers going belly up and avoid taking unsafe actions. For many systems transitioning into a totally inert safe mode is sufficient, but not always.
All of the hardware must constantly run self-tests. That includes periodic CPU and memory tests (both volatile and non-volatile memory) and also all peripherals that are involved. If, for example, a DAC is used to send a signal, the resulting signal must be read back by different hardware to check that the generated voltage is indeed correct.
Manually threading together all these different kinds of cycles and asynchronous events without a RTOS scheduler is hard and becomes error prone. The result is likely less resilient than a preemptively multithreaded firmware.
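A minimal sketch of the watchdog pattern described above, in C. The names and limits are illustrative; a real ECU would typically use a hardware watchdog peripheral and an independent monitoring controller rather than two counters in software.

```c
#include <stdint.h>
#include <stdbool.h>

#define WDG_TIMEOUT_TICKS 10   /* max monitor ticks allowed between kicks         */
#define WDG_MAX_TRIPS      3   /* this many timeouts in a row => defined shutdown */

static uint32_t ticks_since_kick;
static uint32_t trips_in_a_row;

/* The monitored task calls this at the end of every successful cycle. */
void wdg_kick(void)
{
    ticks_since_kick = 0;
    trips_in_a_row   = 0;
}

/* Called from a periodic timer (or a separate monitoring ECU).
 * Returns true when the system must transition into its defined safe state. */
bool wdg_tick(void)
{
    if (++ticks_since_kick > WDG_TIMEOUT_TICKS) {
        ticks_since_kick = 0;   /* start a new window                */
        ++trips_in_a_row;       /* the task missed its deadline      */
    }
    return trips_in_a_row >= WDG_MAX_TRIPS;
}
```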
An operating system, real time or otherwise, handles activating processes, IO, and interprocess messaging. That's it. You don't get magical serial line noise clearing pixies with it, and it doesn't make your actuators less drunk.
> Manually threading together all these different kinds of cycles and asynchronous events without a RTOS scheduler is hard and becomes error prone.
You just said "getting input and sending output is way too hard for software on a Linux kernel." That's a crazy person statement. It turns out that Linux is, and has been for a loooong time, very good at doing operating system things like activating processes and interprocess messaging.
> The result is likely less resilient
Saying "likely" here suggests that you don't know what the result actually is. So what are you arguing? What was my statement?
Whatever assumption you're making that says "This. This right here is the reason why we definitely need an RTOS." Just don't make that assumption. That assumption is wrong.
So to me, "you don't need an RTOS" means that you're running on bare metal. And that would be hard to pull off for the reasons that I outlined above. And I think this is where we ended up misunderstanding each other.
I enjoy the kinds of restricted RTOS environments that we use because their simplicity means that I can get a total understanding of what is going on quite easily.
This does not mean that Linux is completely inappropriate for real-time tasks. I am sure that you could analyze and patch the kernel to match pretty high standards (others mentioned patches). Given the relative size and complexity of a Linux system, this is no simple task. But if you run it on appropriate hardware (not your run-of-the-mill x86), I don't see why you couldn't get reliable realtime responses.
But safety essentially means that the software will not fail more often than once every x hours where x ranges between 10^5 and 10^8, depending on the level of safety required. Proving that for a complex system is hard. For example, how do you show that the essentially indeterministic pattern of dynamic memory allocations happening in a Linux system will never lead to memory exhaustion by fragmentation?
I know of no version of the Linux kernel (or GCC, for that matter) that got a functional safety certification. Safety standards are transitioning away from allowing positive track records as sufficient proof that a piece of software meets safety standards. DO-178 now only allows certified software AFAIK, and I expect this to be carried over into IEC 61508 and ISO 26262. This means a regulated development process, pretty strict coding standards, complete test coverage, full documentation, and so on, also for all 3rd-party software. Not sure how this transition is going to play out in practice.
Do you know any ASIL-C or ASIL-D (or SIL-2/SIL-3) software that is running on Linux? I am curious whether anybody managed to get that certified. I know that Linux is running on some class II medical equipment, but then, standards for these devices are inexplicably lower in practice in my experience.
A stock modern Linux kernel on almost any hardware platform will give millisecond-level responses. Much of the PREEMPT_RT patch set's real-time response features from the old 2.6.x days have been merged into the mainline kernel.
There are lots of problems in software where you are controlling something physical with a control loop from 1 to a couple hundred Hz. Many people assume hard real-time deadlines are necessary for this sort of system, but through good system design practices it often is not necessary. For example, if something physical must be sampled with very low jitter, let some hardware do the sampling and latch it in a register, and then let the software come in with a variance of hundreds of microseconds to get its work done. Once again, write the output to a latched register and let hardware worry about applying the shadow register with very low jitter.
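A minimal sketch of such a control loop on Linux, assuming a PREEMPT_RT-ish kernel: SCHED_FIFO for the thread, clock_nanosleep with TIMER_ABSTIME so period error doesn't accumulate, and hardware latching on both ends. The 100 Hz rate and the three hardware/controller hooks are placeholders, not real APIs.

```c
#include <pthread.h>
#include <sched.h>
#include <time.h>

#define PERIOD_NS 10000000L  /* 100 Hz loop; placeholder rate */

/* Hypothetical hooks: hardware-latched input, controller math, latched output. */
double read_latched_adc(void);
double run_controller(double sample);
void write_shadow_register(double output);

void control_loop(void)
{
    /* Run this thread under the real-time FIFO scheduler. */
    struct sched_param sp = { .sched_priority = 80 };
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        write_shadow_register(run_controller(read_latched_adc()));

        /* Sleep until the next absolute deadline so jitter doesn't accumulate. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```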
Having worked on bare metal microcontrollers, to various RTOSes, to higher-performance embedded CPUs with Linux, I prefer Linux on higher-performance hardware. Obviously, this isn't always possible, especially in power constrained situations. But with Linux, when you suddenly need to have support for an arbitrary network protocol, a database, a filesystem, graphical output, etc. you can have something together in no time. It is often a monumental effort for such a task when bare metal or with a RTOS. It is often difficult to get the supporting software and libraries to build on an RTOS in the first place.
Maybe you don't, but genuinely curious: how do you validate/guarantee scenarios then?
1) What concrete guarantees do you think you get from a special RTOS?
2) Which of those guarantees are meaningful to the scenario?
3) How (by what mechanisms) do you think the special RTOS guarantees the things that it guarantees?
4) Which of those behaviors are something that only an RTOS can provide?
Given all the wtf stuff in TFA, it might very well be...
Same reason SpaceX eschews radiation-hardened processors for redundant off-the-shelf cores: supplier competition. There aren't many RTOS engineers on the market; there are many Linux engineers. Once they got over the cost of hardening the kernel, SpaceX found itself at a scaling advantage versus RTOS-based competitors.
At least with Linux you're getting a system that's been used so much that all major issues like that are ironed out. Nothing beats a few million testers.
edit: I saw the same on a fleet of thousands of JVMs which hung on 100% CPU after 248 days very consistently. Closest thing to an explanation I ever got was perhaps it is storing uptime in hundredths of a second (why not ms???) in signed 32 bit integers, see: https://ma.ttias.be/248-days/
In the end we solved it by restarting with a cronjob between 2am and 4am after 247 days...
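For what it's worth, the 248-day figure lines up exactly with a signed 32-bit counter of hundredths of a second; a quick check:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hundredths of a second until a signed 32-bit counter overflows. */
    double days = (double)INT32_MAX / 100.0 / 86400.0;
    printf("%.2f days\n", days);   /* prints 248.55 */
    return 0;
}
```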
There aren't many Linux engineers who have experience with resource-constrained systems or real-time programming requirements.
A devil-may-care throw-caution-to-the-wind cavalier attitude?
I stand corrected. And I'm never buying a Tesla or property even remotely close to SpaceX launch sites...
Although I don't know the market. It might be screwed by some weird market dynamic.
Not to worry, the tight control loops that require determinism all run on Arduino boards.
You don't (want to) make huge changes to the kernel and libraries codebase tho, even if the changes are meant to remove code you don't need, because testing a heavily modified OS gets prohibitively expensive.
Especially on "modern" embedded from the last 10 years were RAM and storage are not that limited.
I'm asking because I run a few instances with very heavy traffic and have no issues whatsoever. I just added nano and fail2ban and it has run with no issues for about 2 years now.
Basically you rip out anything not strictly required for the task at hand.
Running on embedded hardware is quite different from running on server hardware, disk space and memory are measured in megabytes, not gigabytes...
Obviously you can go much slimmer, but a $10 board is surprisingly capable.
EDIT: This is meant to be a bit tongue-in-cheek, but I seriously do prefer Alpine over literally every other Linux distro I've yet seen for minimalism. Also geared towards embedded-type work.
Did you know that Linux can handle hard RT, if you use the right hardware (and maybe the right kernel/patches)?
So yes, I'm sure that Linux can work. But it will be difficult to prove to auditors that it will always work.
Besides, if you follow Linux kernel development, you see that the effort is virtually never for real-time but for general purpose.
There is no reason per se to attribute any downsides onto these "filters".
Does TFA (which sure, is for Tesla, but same leadership) describe an environment where people use "the right hardware"?
Regardless. There are many and far better alternatives to Linux for real time applications.
People are using Linux when they need HW and driver support, e.g. gigabit Ethernet, FireWire and such. RTOS vendors charge a shitload of money for those drivers.
I trust the Linux drivers more than the RTLinux scheduler or libc. But well, recently networking went to hell, so even there they start fucking up.
An RTOS has nice guarantees, and I definitely see the appeal, but on the other hand:
- SpaceX machines receive more irradiation than computers on the ground, so computation errors already make the behaviour of the software chaotic,
- The wealth of complex, widespread libraries helps a lot.
For instance, did you know that Windows XP has a long history of being used in military embedded devices that store user data (on a writeable, obviously, FAT32 file system) where the way you turn them off is to just cut the power? I shit you not. I've seen state-of-the-art Navy-used sonars where the internal computer was running Windows, and you would transfer data off of the internal hard drive by FTP over Ethernet, and it had no on/off switch, just power or no power.
Why would you want to run backoffice on Linux and then re-create all those wheels by hand in-house? Relying on the expertise of other companies for basic backoffice systems is actually recommended practice until you become big enough to actually need custom software (generally, north of ten thousand employees).
The former is generally a given for a company of any significant size (employees or business activity). The latter is unheard of for most backoffice functions (other than specialized accounting and finance functions) since it's a waste of money and would place the company at significant legal and regulatory risks--it would require effectively becoming experts in accounting, HR, etc.
This applies even if you need heavy customization. In fact, it applies even more--since that level of customization usually means sufficient complexity of backoffice needs that only the pre-built service providers will have the sufficient depth and scope to cover you.
Even Dynamics isn't .NET
It seems as if there were two distinct cultures of engineers. Those working on workstation-grade hardware networked over TCP/IP (whether running proprietary UNIX, open source UNIX, or Windows NT) -- and Java emerged out of this.
The second culture was developers building mainframe applications; usually they would be ones working on problems related to data processing, planning, and automation for businesses (not just enterprises but also many SMBs, government organizations, hospitals, etc...)
Java clearly emerged from the first culture being built by a vendor of networked UNIX workstations. Some of Java's most memorable failures - either exceedingly complex and brittle systems like RMI, JMS, and J2EE (I mean this literally: not modern Java EE like Jersey/CDI/etc... but EJB 2.0) or features that were in retrospect far ahead of its time (JINI or JXTA, compare with consul/etcd/zookeeper and the idea of a service mesh today) came as an attempt to commoditise approaches commonly used by the first group as frameworks for solving the domain specific problems of the second.
(1) Stable platform that's backward compatible over long periods of time.
(2) Very good rapid application development tooling, e.g. Visual Studio which is probably still the best IDE overall.
(3) A huge trained developer base making it easy to recruit. Same goes for IT personnel.
(4) A huge pool of software, custom dev firms, etc.
(5) Certification for US DOD and other certification-heavy environments where Windows is used heavily, which may be important for an aerospace company.
(6) Integration with everything in the business and government world is already done.
(7) Windows has a lot of complex user, permission, and policy management stuff. Active Directory is The Standard for UAM in the corporate world.
The cost of Microsoft licensing is chicken feed compared to the cost of building and launching rockets.
Overall I don't think it's a bad decision. Not everything is an Internet startup or hacker project. Right tools for the job.
I've always wondered why there's such a small list of decent IDEs for C anything.
I usually just stick to vim and a handful of plugins.
For instance, try writing an auto-completion tool for C++ and Java. The first one takes orders of magnitude more work for a mediocre result.
Using vim is a symptom of the problem. The available tooling is so bad or non-existent that it's comparable to a text editor.
They could have bought something off the shelf. Not sure what was the value in building everything from scratch
What I don't understand is why the UI sucks so very, very much every single time. And why it's so very, very slow. It seems like it has to be on purpose. Can anyone with insight explain it to me?
I also think it is cultural. See SAP's blog entry "Why users might think their SAP user interface is crumby" https://blogs.sap.com/2015/09/15/why-users-might-think-their...
Wow, this is surprising to me and wasn't mentioned in the biography. Any ideas why?
Jackson was the guy who realised that PayPal and eBay had massive synergy and worked super hard to get PayPal in there, eventually leading to eBay buying them out, and Musk and Thiel going from merely rich to actual billionaires.
Musk has even admitted as much that he prefers game developers. Maybe he sees the parallel for working overtime and uses this "perk" to his advantage? From an article from 2015:
> "We actually hire a lot of our best software engineers out of the gaming industry," said SpaceX CEO Elon Musk, when Fast Company posed this question during the May 29 Dragon V2 unveiling. "In gaming there's a lot of smart engineering talent doing really complex things. [Compared to] a lot of the algorithms involved in massive multiplayer online games…a docking sequence [between spacecraft] is actually relatively straightforward. So I'd encourage people in the gaming industry to think about creating the next generation of spacecraft and rockets."
That felt weird to type.
Both are reasonably capable of high service uptimes and solid performance. With Server Core and PowerShell, there's a lot more parity than my fellow Linux admins want to admit, but either is a viable choice for general IT services at this point.
Note - I'm excluding licensing entirely from this, as well as infrastructure maintenance and control surfaces. Nobody likes DSC, and there are several superior config management solutions for Linux that don't have meaningful analogs on Windows.
Back at that time of NT4 & Linux 2.2, I'd argue Solaris was the best option.
That's good to know. When I saw the inside of the Dragon capsule, with its shiny touch controls, I was already imagining Astronauts having to deal with installing Android updates on their control tablets while mid-flight, or some other crazy stuff along those lines.
I find C# and .NET runtime (before it meets windows) quite nice. I'm not a big C#-er myself though.
Maybe my brain just doesn't get it, but the documentation makes me crazy too. I just hate everything about it.
The documentation is actually fairly good in my eyes iff you're only writing applications. As a library and custom control vendor (a position I find myself in at work) it can be atrocious and sourceof.net is hugely helpful (and I still find myself wanting to debug framework source code at times).
Still, if you have specific questions, you can throw me an e-mail if you want.
Once you accept that reactive data models are the 'correct' pattern things become so much simpler.
Throw in JSON.net and QuickType (amazing if you haven't seen it: feed it JSON or a JSON Schema and it outputs correct code to serialize to and from JSON in about 25 languages, pretty much idiomatically; for C# it uses JSON.net, and for TypeScript it generates interfaces and, if you want it, runtime validation).
It's a remarkably stable way of hoisting an API.
WPF binds to those objects and when you change the thing via a public property the change notification is fired and the UI updates.
Docs you want are MVVM and particularly INotifyPropertyChanged.
I think a lot of the problem probably depends on the work you're doing. I'm an engineer, usually I just want a GUI that shows me the information I need, I don't care much for design aesthetics beyond making sure it's not hideously ugly. Winforms was good for this, but it definitely looks dated and needed to be replaced or massively overhauled.
I can absolutely see how somebody doing more attractive design work would like WPF. It just makes me grouchy. Somebody else mentioned mixing blend into their workflow, maybe I'll take a look at that.
Expression Blend saves a lot of typing. Also, if you provide design-time data, it saves time because WYSIWYG gives faster feedback than the change-build-test cycle.
When I work on XAML-based GUI, I open the project in both VS and Blend, and use them alternately.
What OS kernels are actually used that are written in anything other than C? Plenty of them (INTEGRITY, VxWorks, QNX) written in C are used in secure, mission-critical applications.
A language is a tool, and like any tool it can be badly designed and/or unfit for certain purposes.
And this is why Torvalds takes kernel API stability seriously...