I used to be a structural engineer, and while things may have changed in the 8 years since I left, I doubt it.
The problem with structural engineering is incentives. It is one of the reasons that I left. Most structural engineering companies are filled with conservative, boring engineers that prefer to look up pre-designed segments and don't make full use of the steel design handbook or building codes.
For example, in Canada, if you have a non-load-bearing brick outer face (most brick buildings in Canada), you're allowed to reduce the wind load by 10%. I was the only person I knew who knew this, because I actually read the steel, concrete, and wood design handbooks front to back while taking notes. Furthermore, almost nobody has read the building code "just because"; they might hop to a section here or there when they need it, but they're generally not going to just sit down and read the thing.
So when I would design buildings I would be able to take advantage of a lot more things than most people. This led to my buildings being cheaper / easier to build, which of course led to our engineering fees looking like a larger portion of the job.
The problem with reinforced concrete is the same. Engineers have no financial incentive to make alterations to their designs to make buildings last longer. It is almost trivial to make sure the steel won't rust (or to double or triple a building's life), but it makes construction costs go up 0.01% and engineering fees go up 0.1%, so nobody does it. Regulators are to blame too. There are amazing concretes (Ultra High Performance Concretes) we should be using in our buildings that don't even need steel because they are so ductile and strong (200 MPa for the one I was familiar with, Ductal by Lafarge), but it's impossible to use them in construction in Canada because the code is so rigid.
> Most structural engineering companies are filled with conservative, boring engineers that prefer to look up pre-designed segments and don't make full use of the steel design handbook or building codes.
I dare say the same is true of software engineering. I, nominally a backend engineer, know (and apply) more about HTTP than most front-end devs and architects I've met, simply because I sat down one day and read the HTTP spec. (It's not a difficult read!)
A few years back I was building a web server from scratch for my own quirky needs (using, of all things, C and Scheme). It required understanding the HTTP protocol, and I agree the RFCs are not all that hard to read, nor is learning the details in order to apply them.
However, what I eventually found out is that the HTTP "rules" were not faithfully followed by many implementations. For example, extra care taken to make sure HTTP headers were correctly parsed just caused headaches. The trouble was that headers received from many origins were "malformed" despite the specs saying what a header "MUST" contain or what characters are not allowed.
I know servers are supposed to be "tolerant" of non-compliant clients (and vice-versa), and realistically there's little choice but to go along with "loose" compliance. I've often wondered to what extent that reality contributes to less than optimum security that's so often been an issue.
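For what it's worth, here's a minimal Python sketch of the trade-off I mean; the header example and the exact strictness checks are illustrative, not lifted from any particular server:

```python
import re

# RFC 7230-style token characters allowed in a header field name (strict check).
TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")

def parse_header_line(line: bytes, strict: bool = False):
    """Parse one 'Name: value' line, either strictly per spec or leniently."""
    name, sep, value = line.partition(b":")
    if not sep:
        raise ValueError("no colon in header line")
    if strict:
        if name != name.rstrip(b" \t"):
            # e.g. b"X-Weird-Header : value" -- whitespace before the colon
            raise ValueError("whitespace before colon is not allowed")
        if not TOKEN.match(name.decode("ascii", "replace")):
            raise ValueError("field name contains characters outside the token set")
    # Lenient mode: split on the first colon, strip, and hope for the best.
    return name.strip().decode("ascii", "replace"), value.strip().decode("latin-1")

print(parse_header_line(b"X-Weird-Header : some value"))                # lenient: accepted
# parse_header_line(b"X-Weird-Header : some value", strict=True)        # strict: raises
```

In practice nearly everything ends up running the lenient path, which is exactly the slippery slope I'm describing.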
How many of us actually read the documentation and the source for the systems we use? All the options and flags for jq, wget, socat, ssh, rsync, etc. I am trying to spend about 5 minutes a day just reading man pages, esp about things I THINK I know but actually don't.
In my personal experience, the best engineers I've encountered (and learned from) have understood every system, subsystem, and interaction, all the way down to the most fundamental foundational level. That understanding allows them to make the best decisions, because they're equipped with the best information. This knowledge doesn't come from sitting down and studying how CPU architecture works when you're building a web application. But it does come from diving as deep as is required for any given task. Maybe you're dealing with a web app performance bug and you have to crack open the Chrome source code, trace it down to something compute-intensive, learn whatever C++ code is involved, understand how it utilizes the CPU, and learn about the specific architecture that exhibits the problem. You will have obtained a significant depth and breadth of information along the way, but at the end of it you know and understand exactly why your web app performs the way it does, how to work around it in your app, how to fix it in Chrome (or why you shouldn't), and how the CPU architecture affects the Chrome source code. Now you can apply Chrome, CPU architecture, and C++ to anything that is built upon any one of them (independently or otherwise). That's not to say you know everything about each of them, but you've learned things that will help you in the future in some cases.
The most important skill here is being able to diagnose a problem and fearlessly, relentlessly employ the engineering discipline to solve whatever problem/task is at hand, not based on observed symptoms ("hey, I turned that knob and everything was OK! I don't know why, but I can close this JIRA ticket and move on with my life. I'm a 10X engineer!") but on understanding precisely what's happening. I made the mistake of spending the first decade of my software engineering career learning from trial/error and observations, and while those skills are useful in some cases, the best engineers are extremely disciplined about understanding the full depth of a problem before writing a line of code.
In a nutshell, I guess what I'm advocating for is: do not blindly study man pages. The reason is that without a practical application for the knowledge, it seeps out of your brain and you forget it quickly. The exception (case in point, GP's example) is when what you're studying does have a practical application or is relevant to what you spend your time doing. This has always been my problem with academic curricula (sure, some people can learn well this way, and there's definitely a minimum foundation that must simply be committed to memory). Even in basic subjects like maths -- the work is rote, and we maybe get a passing grade, but often without the understanding (or the depth of understanding) that is really the most important aspect of learning the subject matter.
I have optimized websites based on a basic understanding of how CPUs work before. For example, it's much faster to do 200 checks on an object and then load the next object than to do one check at a time across all objects, reloading each object 200 times. This ended up being a 30x or so speedup and seemed like magic to half the room.
It's not about knowing the minute details so much as understanding what's going on well enough to model it in your head.
PS: This assumes you are operating on lots of data; a small-scale test can, and did, go the other way.
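The shape of the change, in a made-up Python sketch (the 30x obviously depends on data size, the cost of loading an object, and caching; this only shows the two loop orders):

```python
def run_checks_per_object(objects, checks):
    """Load each object once and run every check on it while it's 'hot'."""
    failures = []
    for obj in objects:              # each object touched exactly once
        for check in checks:
            if not check(obj):
                failures.append((obj, check))
    return failures

def run_objects_per_check(objects, checks):
    """Run one check across all objects, then the next check, and so on.

    Every pass re-reads every object, so with 200 checks each object is
    effectively reloaded 200 times -- the slow pattern described above.
    """
    failures = []
    for check in checks:
        for obj in objects:          # whole data set traversed once per check
            if not check(obj):
                failures.append((obj, check))
    return failures
```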
Good thinking, but beware of what a CPU "is". I've just come back from the intel.com boards and... holy jesus, the amount of detail that even memory-locality-level thinking ignores. To fully leverage a processor you need to understand OS cache conventions, the interaction with the L1 and L2 caches, and how those caches are wired to the actual cores. Otherwise you're already losing 30% of the raw bandwidth.
I came away with a strong bias toward laziness in optimization: profile based on what the business needs and ignore everything else, or you will never escape the rabbit hole.
Most people wouldn't know an algorithm or the concepts of algorithmic complexity if they bit them on the ass. Even devs. You don't need a PhD in computer science; just read some stuff and think a bit.
I completely and totally agree with you. I would only add that the best engineers also understand when to take a complex set of interactions and create a black box abstraction from them. They also understand when the abstractions are likely to leak and what the consequences are.
I am not putting that much weight into it; 5 minutes a day is not a lot to become familiar with the capabilities of your tools. A couple of weeks ago, I had no idea that `jq` had a compiler in it. Many of us, myself included, use our tools in very shallow ways.
I think everyone has time for it, but it requires nerves of steel. You are thinking "I could just fix this the easy way", you are feeling social pressure to quickly get to the next thing. It's easy to decide "I can't take the time to really figure this out."
But if you can ignore the pressure and stick to your guns, you end up saving time in the long run, sometimes making orders of magnitude more work possible. Most managers should appreciate that.
But it's difficult to have the nerve to do it, and it can be difficult to explain in the short term. Like most opportunities there's a cost to pay up front.
Would that we as a profession developed an encyclopedia of ways of pushing back on "Is it done yet? How much longer?" completion pressure. There's certainly a profusion of lore about PFYs and lusers, why not structural business frustrations?
Well... yeah... that's kind of the whole basis of 'requirements elicitation': to understand what your client is trying to accomplish on such a level that they don't need to give you a list of tasks; you create the tasks that will accomplish what they need the system to do.
depends on what you consider your job to be, right?
for all I know, historically there must have been a lot of masons asking the same question when stacking bricks: "people have time to do that while building a wall? to carefully put mortar between the bricks??"
but nobody remembers those masons because all their walls have fallen apart by now.
("... wait seriously, even the inner walls?? but the boss never checks those anyway")
_This_ is the right path. Dig as deep as you need. Don't be afraid to get your hands dirty. So many people just randomly fear the 'magic' of the lower levels.
Do you have any tips for remembering the minute details in the manuals? Do you make flashcards, or do you re-read them repeatedly spaced out over time?
I find the volume of it overwhelming, but I think I have a practice now that works well for me. Say I want to do something in vim, but it feels clunky. Part of me says "there may be a better way to do this", and I go looking for one. I usually limit such a search to ten minutes or so. I'll stretch that if I'm getting closer.
It's not a hard science, but I think the two important elements are 1. Being willing to deep dive and 2. Monitoring how much time I spend to allow for reasonable stops. I come back to unsolved issues when they come up repeatedly. That tells me those are more important.
My personal process has a lot of parallels with "lazy" or "short-circuit" evaluation and "greedy" algorithms.
First, remembering the fact that certain information is out there is a lot easier to remember than the actual details of that information. Bits like "zsh has this crazy advanced globbing syntax that obsoletes many uses of `find`" or "ssh can do a proxy/tunneling thing and remote-desktop things with the right options, also it sometimes doesn't need to create a login session and sometimes it does" or "ffmpeg has these crazy complex video filters that allow you to do real cool tricks (therefore maybe the same for audio filters though I haven't actually read about that yet)".
Some of this is man pages, some of this is blog posts or stackoverflow answers. I keep my bookmarks well-organized using tags (in Firefox, Chrome doesn't seem to have tagged bookmarks for some reason, last time I checked). Whenever I find something that seems it may be useful some day, I bookmark it, tag it properly and sometimes I add a few keywords to title that I am likely to search for when I need the info.
Then, given the knowledge that some information is out there, I allow myself to look it up whenever.
I've never been very good at rote memorization, at least not doing it on purpose. I often lack the motivation to muster up the will and focus required. So I don't force myself, but somehow still remember stuff any way.
There's so many tiny things in such a wide field of interests, I don't even really want to memorize all :) So I cut it down to knowing the existence of information (and sometimes, classes of information).
Then maybe some day I'm working with some particular features of ssh or git, and I notice myself looking up the same commands or switches a few times over again. So apparently I'm not memorizing these. Then, I make a note. That's not a very organized system, it can be a post-it, a markdown/textfile, an alias, a shellscript, a code comment, a github gist. I used to try and keep one textfile with "useful commands and switches and tricks and and and", but I found myself never looking at it, so I stopped doing that. Instead I try to put the note somewhere I'm likely to come across when I need it in context.
The way Sublime Text just remembers the content of new untitled text files, and then allows you to organize these groups of files into projects and quick-switch between projects using ctrl-alt-P, is just perfect (or shall I say, "sublime"?). It allows a random short note to evolve organically from a temporary scratch into a more permanent reference note.
I also download some reference manuals, so I have access offline, which is often significantly faster to quickly open, check and close. For instance there's a link to the GLSL 4 spec in my start menu, which instantly opens in katarakt by just pressing "alt-F1, down, right, enter" -- a leftover from a project where I was reading that thing all the time. After a while I added a shorter webpage-converted-to-markdown reference to the sublime project file, and now I use it less.
I guess the shorter summary is: yes, I do have tips, but they're what works for me. The more generally applicable advice is: there are tips and there are tricks, and they are whatever works by any means necessary. But most importantly: yes, there are tips and tricks, and some of them will work for you too! :)
RTFM is a weird boundary. I've 'wasted' so many hours dabbling in tutorials made by other people instead of diving into the real information: specs, source. It's a mental click; maybe it seems overwhelming, maybe it seems too broad and we're too impatient to read a chapter to get an answer. After a while 1) you get more patient and 2) you know other sources won't help... and all of a sudden specs look like fun reads.
ps: I was just on www.ecmascript.org/es4/spec/, historical artefact but full of surprises.
I'd say it's the exact same problem. The structural engineers are looking at these pre-designed components as black boxes, not bothering to understand why they were designed the way they were, what they do internally. A huge portion of software engineers sees the components they work with (such as HTTP) as black boxes, too. This means that when the engineer is considering its use, they cannot effectively consider how it will hold up in the particular situation they are dealing with.
You need to know at least nominally how the sub-components interact, or you can't predict how something will perform when you use it. Even the strongest abstractions leak a little bit.
I'd just like to mention one counterargument, which is that the rigidity of the code in some cases helps protect against developers using cheaper materials, or new materials that on paper seem better but in reality may not be.
An example I've heard of, but am having trouble finding the exact name of, is that in condo buildings here in Canada they started using a new piping material for the water delivery inside the units. The problem was that, while in theory the material was better, it had the property that when it failed (due to either a manufacturing or installation defect) it failed catastrophically. So instead of just a small pinhole leak, the piping would split when compromised, and you'd have many thousands of dollars in damages across multiple units. Buildings with this material can no longer be insured in Canada.
So I don't know that I would really trust giving developers a wide latitude in selecting materials, even if they sound great on paper.
I don't know what the solution is, because I agree we need to be more flexible, and have a way to introduce new and better technologies, but we also have to be diligent, in ensuring that the new technologies and processes work the way we expect them to, and have the desired effects.
In Canada we use what is called limit states design, and it already handles sudden failure vs. gradual failure. Essentially, if you want to use a member or a material that fails suddenly, you must design it to fail one tenth as often. In practice engineers go even more overboard, because nobody wants blood on their hands and gradual failure greatly decreases the odds of that happening.
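For anyone outside the field, the basic check looks roughly like this (generic limit-states notation, not a quote from the Canadian code):

    \phi \, R_n \ \ge\ \sum_i \alpha_i S_i

where R_n is the member's nominal resistance, the S_i are the load effects with their load factors \alpha_i, and \phi is a resistance factor that gets knocked down for materials and failure modes that are brittle or poorly characterized. Pushing \phi down (equivalently, targeting a higher reliability index) is how "fails suddenly" gets translated into "must be designed to fail far less often".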
In terms of construction materials in condos failing (glass and piping) I actually put the blame for those two failures on the individual testing companies. Engineers in Canada don't understand statistics properly because most aren't taught materials testing properly in university. But we don't live in a utopia either. It is expected that things will go wrong sometimes and that we'll have to update our building codes to address those shortcomings. My issue with Canada is that we don't have a (or at least I'm not aware of there being a) structured materials testing code.
Sure, we'll regulate materials and connections for our main structural elements, but there will always be that last mile where someone wants to do something weird (like build part of the building under a railway, or when someone has over-bent some reinforcing bar and you need to authorize rolling it back), and you're pretty much on your own once you come to those scenarios. It's doubly unfortunate because most of the engineers who just look things up in tables are basically helpless, since they don't remember or use the basics day-to-day. So they end up being extremely conservative - unnecessarily wasting money and material for all of us.
I don't practice in Canada, but my understanding was that the CSA is an analogue to the ASTM codes here in America (and internationally apparently, if you believe their name change). ASTM codes are very thorough when it comes to materials testing. I believe Europe has a similar standard.
I don't understand how your third paragraph's thrust follows from your second paragraph - what does materials testing have to do with site specific (railway) or field changes (bent rebar)?
How long did you practice in Canada? Your viewpoint of engineers meshes well with the opinion that I've heard from a lot of junior level engineers who are just making the adjustment to a mid-level position but are still interacting with the lower level staff who are, as you say, typically helpless. They are supposed to be - they are still learning.
Only about 3 years before I got fed up and left. I'll fully admit I didn't get that involved with materials testing, and perhaps the firm I was with was substandard in this regard, but I don't think so. When I looked into the falling glass in Toronto I learned that they only tested a very small number of fasteners. I don't recall the number, but it was trivial statistics to show that, for the number of glass panes going up in Toronto, they didn't have a large enough "n".
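To give a feel for why a small "n" is so damning, here's the zero-failure bound with made-up numbers (I don't remember the real ones):

```python
def failure_rate_upper_bound(n_tested: int, confidence: float = 0.95) -> float:
    """Largest failure rate p still consistent with 0 failures in n tests:
    solve (1 - p)**n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tested)

n_tested = 30          # hypothetical sample size
n_installed = 100_000  # hypothetical number of panes going up city-wide
p_max = failure_rate_upper_bound(n_tested)
print(f"95% upper bound on failure rate: {p_max:.1%}")            # about 9.5%
print(f"failures still consistent with the test: {p_max * n_installed:,.0f}")
```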
The two examples I gave were ones that I dealt with personally. I was extremely dismayed at the rigour the firm I was at used. To test the bent rebar I think we used a sample size of 6 and then tested to failure. For the railway example they just used the weight of the train. Then, when I reviewed the designs and brought up that the train could apply the brakes and thereby increase the downward force, they just multiplied everything by 2.
In my experience the low-level staff were useless, with a couple of people who knew what they were doing. The mid-level staff split into two groups: the people who still knew advanced math and the people who had gotten good at AutoCAD. The senior people, while good at sales or general guidance, had basically forgotten all but the most basic structural engineering principles. I've literally had to explain crushing vs. bending moment to a 20-year structural engineer. I once (accidentally) designed a structure that had already been designed by a senior person who forgot to put it in the tracking system. I used one sixth the steel and mine could handle more load.
I will grant you, however, that I may have just been at a substandard firm. We had some large projects, but we weren't designing new skyscrapers or mega-structures.
For someone who is self-admittedly not knowledgeable about the testing requirements, you seem very certain of your conclusions about this Toronto falling glass problem. Testing of components is usually by the manufacturer and it is their responsibility to provide a product that meets the requirements of the design. This is not a problem from the design side and is very difficult to prevent without the engineer being onerous with his requirements to a point that no engineer is really willing to go to.
I'm not familiar with the specifics of your rebar example so I can't comment. Your example with the train makes no sense - the design loading for railway is codified in the design manual (AREMA in the USA) and includes dynamic forces. Braking forces are applied longitudinally to the track so unless you were in a curve there is no downward force. I find it hard to believe that your boss agreed with a fictitious force and then just multiplied everything by 2 to get around it.
Your opinion of your colleagues is concerning to me and is probably more indicative of your lack of experience than of the other staff's incompetence. Your experience reads like someone suffering from 200th-hour syndrome; I wouldn't be surprised if, had you stuck with it another 3 years, you would have realized your initial impressions were way off base. At worst, it sounds like you may have been working at a firm that did commodity work and didn't attract top-tier talent. If you are as good as you seem to think you are, then you should have jumped ship when you got "fed up".
I don't intend for this post to sound dismissive but it will probably come off that way.
As an aside, knowledge of advanced math is not necessary for structural engineering in my opinion, nor is it common.
"Braking forces are applied longitudinally to the track so unless you were in a curve there is no downward force."
Why is that? Is it because the cars behind are pulling on it and keeping the usual forward weight transfer from happening?
Think of a motorcycle doing a "stoppie", i.e. the rear wheel is in the air under braking and all the weight is on the front wheel.
This is hard to describe without being pedantic and without being able to draw but I will attempt.
A motorcycle performing a stoppie experiences rotation because the inertial force couples with the braking force to create a moment about the front axle of the bike (this isn't technically correct language but you get the gist). While this idea holds true for the train, we have to take into account the differences in mass and contact between the two systems. Train cars typically ride on more than 2 axles and this provides stability from front or rear tipping. Train cars are also typically very heavy meaning that the braking force is not sufficient to 1) move the center of forces of the system ahead of the front axles and 2) tip the car. Increases in load because of this are, therefore, not sufficient to double the load on the front axle as you would see with a stoppie.
In general I agree with the idea put forth; however, it is important to note that what we are discussing is the BRAKING force. The inertial forces that result in differential axle loads are not braking forces (certainly, they are a result of braking in this case, but the same forces exist when a train begins pulling). These loads are DYNAMIC loads and are already considered in another part of the analysis. Dynamic loads also include consideration for bumps, etc. Because of this, the code is explicit that braking forces are applied only longitudinally to the track so that the forces are not counted twice.
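To put rough numbers on it (a deliberately crude two-axle idealization of my own, not how AREMA actually packages these loads), the extra load on the front axle group under deceleration a is about

    \Delta N_{front} \approx W \cdot \frac{a}{g} \cdot \frac{h}{L}

with W the car weight, h the height of the centre of gravity, and L the effective axle spacing. A braking motorcycle has a close to g and h/L around one half, so the front wheel load roughly doubles and the rear lifts. A freight car with h/L somewhere around 0.2 and a on the order of 0.1 g transfers only a couple of percent of W, which is why the code folds this into the dynamic allowance instead of treating braking as an extra vertical load.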
Thanks for responding, and I'm very happy that there are people like you out there; but trust me when I say that despite my poor recollection from the time I practiced, my fundamental point is not wrong. Most engineers I've worked with in Canada are not to be trusted with advanced design. If you disagree I'd like to really talk to you about it because I felt like I was surrounded by people that had no idea what was going on and I would really like to be proven otherwise.
I'm not competent to explain why Tacoma Narrows failed, but it obviously wasn't up to design load. The existence of large sheets of glass falling off buildings and endangering people from multiple different installations strongly indicates that someone botched something.
The 200th hour syndrome refers to when a pilot has reached their 200th hour in the air and their confidence in their abilities is enough that they begin to get careless:
"Enough experience to be confident, enough to screw up real good." is how a nice TV show put it.
You're right, one reason engineers stay conservative is if the new ideas have problems that show up later, the engineer gets the blame. Hence he sticks with "nobody ever got fired for buying IBM".
For example, when my house was built I wanted something better than fiberglass insulation. After research, I settled on icynene foam. The contractor refused to use it, because he'd be to blame if it went wrong, and it would be an enormously expensive retrofit. He finally agreed after I formally agreed to accept the risk and not blame him.
15 years in, and the icynene has been great. No troubles at all.
I don't really understand why you left. I would have loved to employ someone like you back when I needed a structural engineer. You had an opportunity to make a lot of money by shining head and shoulders above your competition.
The only problem I see is a marketing issue - being able to get the message out to your customers of the advantages of going with your firm.
The City of Toronto blacklisted us because our engineering fees were over 25% of the construction cost. If you're still in engineering and want to meet, send me an email. I think there is loads of opportunity to make constructing buildings more efficient, and I would love to combine my undergrad with computer science.
"...which of course lead to our engineering fees looking like a larger portion of the job."
So there you go. Mismeasurement is 99% of all business problems. I've been laid off before because I didn't generate the (apparently) required level of software defects and missed a gate or two by a day or two doing it. I was actually told I didn't look busy enough.
If most are unaware of it, is it then good practice (aka insurance) for you to cite the relevant codes in your plans, in order to avoid trouble by inspectors due to the differences, or is it safe to expect an inspector to know the codes A-Z?
Inspectors do not typically check plans and calculations for adherence to the code, they check whether the work matches the plans. Checks for whether the plans match the code are generally done by the reviewing agency, to varying degrees of effort. For example, an agency will check for adherence to their design manuals (which are the governing code for their work): Caltrans against their manuals, railroads against AREMA, building departments against the building code, etc.
And yes, it is good practice to cite the code as appropriate but it isn't necessary - any questions by the agency will be sent as comments and approval will not be granted until they are satisfied.
You've identified a fantastic application of AI, when the technology gets there. A machine can read all the codes with better retention and would have the patience to make these optimizations. A human in the loop could offload some of the judgement-intensive aspects.
Why don't you have that AI read all manuals not just construction ones. Also all legal texts, all medical texts and all computer texts. It could optimize everything!
> The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
You joke, but that is a great idea. IBM's Watson was helpful to doctors by being able to reference huge amounts of medical literature - basically a domain-specific natural-language search engine. I've read about similar software for lawyers searching for relevant laws and cases.
The AI doesn't need to be so good it can replace the doctor/engineer/lawyer. It just needs to be a helpful tool for finding relevant documents. NLP and QA is just starting to get good and available to the public, so I think this will be a big thing soon.
That's one problem with government regulations. Even when they're actually well designed and impartial, they don't get updated when technology changes.
After a few decades, the once great regulations are often way out of date.
Well, yeah, but they have to exist because of a failure in the free market. Without regulations the free market doesn't have a good way to prevent cheap, unsafe construction, especially when the problems aren't evident for a couple of decades.
There might be alternatives. For example, requiring large and excessive liability insurance. The insurance companies then have a huge incentive to make sure buildings are safe and last long term. But the laws about what building materials you can use aren't set in stone, and different companies can have different rules.
There are multiple problems. The biggest is that corrections only kick in after a failure.
Suppose an oil tank farm is incorrectly built, a tank breaks, and the oil gets into the nearby river, killing fish and preventing it from being used as drinking water for 100 miles downstream for the next week.
Who gets to sue if the insurance company decides to not pay? How much does each lawsuit cost just in lawyer time? How long does it take to get through the legal system?
If the insurance company doesn't have the funds, what happens? This can easily happen because the company has every incentive to find the least expensive insurance company, or the company might be overstretched, like Lloyd's in the 1980s and 1990s with the asbestos, pollution and health hazard suits.
Or it can be more perverse, like an insurance company set up as a front, working with unsafe companies, and where a significant problem or payout simply triggers bankruptcy, and no remuneration.
If different companies have different rules, how easy will it be to switch insurance companies? Because that sounds like a great way to get lock-in. Can I bring my own building inspector in or do I need to depend on the insurance company inspector? Will the codes be public information?
The insurance companies themselves would have to be regulated. This moves regulation up one meta level, and makes it a simple matter of verifying they have the assets necessary to pay out. As opposed to having the regulators figure out what building materials are good, and every other detail about the construction industry.
Insurance companies are usually themselves insured. If they go bankrupt, e.g. a natural disaster or something unexpected happens, another insurance company has to cover it. This is possible because they insure multiple industries and geographic regions, so can take a few hits.
The details will need to be worked out, but I don't see why it would be anywhere near as complicated as working out the details of building codes. Regulations are complicated, we already accept that. This is just a way to significantly simplify it.
Figuring out whether the insurance companies have the assets necessary to pay out is never a simple matter. They don't have the assets to pay out if every policy is fully claimed at once, almost by definition, which means that it's a complicated matter of assessing how many policies are likely to be claimed at once, which requires figuring out details like whether the construction industry is making widespread use of a potentially dangerous material. Insuring the insurers can only help so much; Lloyd's was the insurers' insurer of last resort, and as dalke points out, asbestos claims wreaked havoc on them.
Or when Hurricane Andrew hit Florida in '92. 11 insurance companies went bankrupt after 600,000 insurance claims were filed. Almost 1 million people lost insurance coverage.
One of the results was a stricter and statewide building code instead of piecemeal codes. Successive hurricanes helped give evidence for the usefulness of the new code.
Another was a state trust fund to ensure sufficient insurance capacity.
You could have both regulation paths: Either you build according to code, or you build whatever you want so long as it has plenty of liability insurance.
As a side note, I really wish more of these sorts of alternate regulations existed. They would be very useful fallbacks for when regulations become outdated or overly restrictive. Another concrete example of this is automobiles. Right now in the US, you can't bring a consumer car to market without extensive crash testing from the NHTSA and fuel economy tests from the EPA. These high fixed costs eliminate enthusiast and niche manufacturers. If the law said, "Any model that doesn't pass these tests incurs a $10k (or 25% or whatever is onerous enough) tax on each vehicle.", it would allow for new manufacturers to enter the market with far less capital.
Actually, a new model is emerging in construction where the contractor is responsible for maintenance for a couple of decades. Sounds like a win/win - it's guaranteed income for the contractor unless they screw something up in the construction phase. Someone can probably remind me what this system is actually called.
Does the International Building Code that many jurisdictions in the US use help here at all? It's revised every 3 years. But I could also see it being overly conservative and holding back local innovation.
As I understand the parent it's not that the regulations are outdated but rather engineering firms are unwilling to take advantage of all that the regulations allow.
I'm not in construction and I'm not an engineer, but when I was watching Mike Holmes he said that "code" (the building code) really means the minimum building code. I had never really thought of it like that before; it's a good point. Why strive for the minimum?
You can build better; there's no reason you can't (other than cost, obviously), but most people just aim to barely pass the minimum building code.
That's a bit misleading. Yes, new construction must at least meet the building code. But it's not like it's been designed to be just enough. There's a heavy safety factor in the codes. In some cases they work to reduce the code because it's too much.
"This lead to my buildings being cheaper / easier to build, which of course lead to our engineering fees looking like a larger portion of the job."
I propose you could start a consulting gig and maybe your own firm. Can't be that there aren't people who want their buildings to be cheaper for the same quality.
For whatever reason, people in general prefer to pay for things, not knowledge. It's much easier to ask for $10K more of materials than it is for $8K more of consulting, even if the latter is actually 20% cheaper. Human nature.
This article has many problems. Most importantly, building techniques such as "steel frame", "traditional bricks and mortar", "mud brick" and "rammed earth" are far less capable than reinforced concrete. The article implies that as these are more "durable" they are superior to reinforced concrete. This is a false equivalence of staggering magnitude. Reinforced concrete is the great enabler of modern high-rise construction and civil engineering; most projects simply would not be able to be built without reinforced concrete. Without reinforced concrete the world would be a very different place.
I take special issue with the article's use of pseudoscientific false analogies.
> This means that concrete structures, for all their stone-like superficial qualities, are actually made of the skeletons of sea creatures ground up with rock. It takes millions upon millions of years for these sea creatures to live, die and form into limestone. This timescale contrasts starkly with the life spans of contemporary buildings.
This is utter drivel.
There is a valid point in that for smaller scale constructions other techniques may be applicable which are otherwise ignored; also that there are alternatives to steel as the reinforcing material, both for prestressed structures and not.
I read an engineering guideline on concrete that I wish I could find. It compared the economics of building a steel vs reinforced concrete bridge. Steel came out better when you considered the lifetime and especially the recyclability, but concrete was cheaper to build initially. The steel bridge would last indefinitely, as long as you kept it painted, or even replaced rusted parts.
Reinforced concrete is costly to demolish and nearly impossible to recycle. Crushed concrete can be used as a filler, but in order to be used in fresh concrete it must be free of contaminants, which is rarely the case, and no one wants to risk the integrity of a new structure of any importance.
The percentage of all concrete structures ever built that are still standing must be quite large. This will not be a great legacy to leave future generations.
We're not running out of limestone anytime soon, but otherwise this is a bit off-target. All of the alternative construction techniques you mention have advantages over reinforced concrete. Concrete also has its advantages, but TFA is not wrong about the disadvantages.
When I traveled to the Philippines earlier this year I was struck by how they use masonry in situations that in the USA would be reinforced concrete. Granted, the blocks are all CMUs, but the technique is masonry. I think it's because very few roads (at least in the places I traveled) would be suitable for standard 6-yard concrete trucks, whereas you can always throw a few dozen blocks on the back of a motorcycle. Of course labor costs are also a factor. However, concrete in block form is totally recyclable, while as TFA notes, when poured it is not.
There was an article on HN some time ago on how we are running out of the type of sand necessary for concrete. There is apparently a big black market for sand stolen from rivers and beaches, leading to violence and environmental problems.
I guess that probably happens some places. I haven't seen it though, and I recently tore down a block building for a friend who has since built a foundation with the blocks. I don't think concrete would be a great material in an earthquake in any configuration. If you have to mix up concrete for pouring anyway, I'm not sure why you'd build a wall first when you could just place some forms instead, which would take much less time.
Yeah, if you're in an earthquake zone, like the Pacific Ring of Fire (https://en.wikipedia.org/wiki/Ring_of_Fire), which most certainly includes California and the Philippines, you're eventually going to be very unhappy if you don't reinforce them.
I'm a civil engineer. This is bullshit. Reinforced concrete uses much less concrete because, well, you have rebar to take care of the tensile stresses and concrete does well in compression, so it's much more efficient - which is basic. Also, and this is a very important point, reinforced concrete (in general) tends to fail in non-catastrophic ways, making it safer to use and easier to spot conceptual errors in the design and building process. Reinforced concrete can also be recycled: the concrete becomes structural blocks (I've even worked with these before) and the rebar is steel, so that's easily recyclable too. In the end, it's cheap and affordable, so you can build much more with reinforced concrete than with concrete reinforced with carbon fiber, which would last forever but would cost a fortune (carbon fiber can also be used to strengthen existing reinforced concrete...), making housing unaffordable to a large part of the world. Do you really want to spend that much more to make a structure last 500 years without using reinforced concrete? You know that goes into the equation when engineers design structures, right? Oh well, clickbait.
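To put the "rebar takes the tension, concrete takes the compression" point in textbook terms (the standard singly-reinforced rectangular-section formula in ACI-style notation, not something from the article):

    M_n = A_s f_y \left(d - \frac{a}{2}\right), \qquad a = \frac{A_s f_y}{0.85\, f'_c\, b}

Here A_s is the steel area, f_y its yield strength, f'_c the concrete strength, b the width, and d the effective depth: the steel carries the whole tension force and the concrete only has to supply an equal compression block, so you need far less material than with a section that had to resist tension in the concrete itself.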
The point is that longer-lasting structures should be cheaper, but because we don't factor in environmental harm and lifecycle cost into the price of things we end up with cheap buildings that exist to generate ROI ASAP.
As an aside, fiberglass (similar to carbon fiber) is used frequently (but still not much, relative to steel) in many applications. We use it extensively in underground applications.
The problem with fiberglass reinforcement is that it does not undergo ductile failure like steel does. Steel will yield and, in addition, has strain hardening behavior. Fiberglass just fractures and that's that. Extra precautions must be taken when using fiberglass in failure critical members.
Steel and concrete also have similar coefficients of thermal expansion. This means that as the temperature fluctuates, there is minimal internal stress, owing to the similar strains.
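Rough numbers, using typical handbook values (your mix and temperature range will differ):

    \varepsilon = (\alpha_{steel} - \alpha_{concrete})\,\Delta T \approx (12 - 10)\times 10^{-6}/^{\circ}\mathrm{C} \times 40\,^{\circ}\mathrm{C} = 8\times 10^{-5}

Even fully restrained, that strain corresponds to only about 16 MPa in the steel (E ≈ 200 GPa), small next to its yield strength. Pair concrete with a reinforcement whose coefficient is very different and the seasonal strain cycling starts to matter.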
Why not just use straw, instead of carbon fiber, in concrete? Straw is comparable to steel. Chopped straw is used for stucco, but it can be used in reinforced concrete too.
It is hard to calculate the amount of straw necessary to reinforce concrete, because its strength varies, but it is cheap, so just triple the amount of straw.
Straw could probably be used effectively for lightly loaded structures (slabs, bearing walls in nonseismic areas) the same way that fiber is used currently. It is essentially for crack control and provides nominal flexural capacity which is generally hard to quantify but there are some formulas which are accepted.
Using straw for anything that is loaded in flexure will be a disaster. Reinforced concrete theory relies on the reinforcing to act as a crack stopping mechanism which will yield in a ductile manner. Straw's variable strength and inability to place it at critical sections means that it cannot perform the function of reinforcement as it will be possible to encounter localized weak reinforcement which will not prevent cracking and loss of section leading to progressive and sudden failure.
When straw/wood is enclosed in concrete with some lime, it does not rot. I saw a video [1] of the remains of houses built by German prisoners in Siberia using "soft concrete" - concrete with wood chips (cement-bonded particle board, AKA arbolite, fiber-reinforced concrete, papercrete, etc.). They still look good after about half a century without any maintenance, even in broken walls without a roof.
From my own experience, I've seen wood rot quickly to a depth of about 1 cm (1/2") where it contacts concrete or cement stucco, but remain intact when enclosed in a cement-lime mix. IMHO, the lime is what saves the wood/straw from rotting.
People didn't have a different perspective back then. It's just survivorship bias. People built plenty of crappy disposable buildings 116 years ago, you just don't see them because they were crappy disposable buildings built 116 years ago.
Mine is just under 200 years old--also in the eastern US. It would be pretty silly though to view my house as this incredible structure that a farmer built 200 years ago to last for the ages. The fieldstone foundation is original as are various posts and beams. But the house has been expanded, rebuilt, updated, etc. in all manner of ways since it was built.
This article makes an interesting comparison to ancient Roman concrete. While the Romans built a tremendous amount of infrastructure in concrete, survivorship bias means that the few bits that have hung around are read as evidence of some superior Roman understanding of concrete.
However, if you go to Rome, most of the surviving bits are millions upon millions of stacked and mortared bricks, and most of what has survived are uninteresting walls. [1] For some reason, our collective memory of Roman ruins is that they're all aged concrete or stone. But when in Rome, you end up seeing lots of this [2], which when built probably had a layer of facade material on it.
And it makes sense: stacked, weather-resistant, often covered in a prettier facade, baked bricks should last more or less forever until the elements wear them down.
The widespread reintroduction of concrete as a building material really didn't happen until around the turn of the 20th century. And reinforced concrete didn't find widespread use until a few decades after that. Not particularly confusing, the first generations of buildings built with these fairly new and only partially understood materials are the buildings that the author is mostly writing about.
The real culmination of exposed, reinforced-concrete-everywhere construction finally happened in the 50s with the advent of the eye cancer called brutalism. Today a tremendous number of brutalist buildings are absolutely falling apart, and I blame that on a lack of understanding of how reinforced concrete should be used, the availability of more modern materials, and perhaps an overenthusiasm for and misuse of materials where they shouldn't have been used.
But still, if we fast forward a thousand years, there's bound to be a percentage of those structures still around and survivorship bias in the future will lead some to speculate that the engineers of the 20th century were geniuses unrivaled by any in the future.
Roman concrete holds up against the erosive properties of seawater better than the materials used today. Scientists discovered it's because they used volcanic ash to make their concrete.
That's part of it, but mainly Roman concrete holds up much better because they never used reinforcing rebar. Volcanic ash improves longevity further, but if they had included rebar (volcanic ash or no), all those structures would have rotted away a long time ago.
More realistically, zero steel-reinforced concrete structures will survive that long, but other types of buildings might. The house I grew up in, for example, is pushing 300 years old. That's still a long way to go, but the older it gets the more likely it is to be preserved. A 90% chance of surviving any given 50 years = a 2-3% chance it makes 1,900 years.
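(The arithmetic, in case anyone wants to check:

    0.9^{1900/50} = 0.9^{38} \approx 0.018, \qquad 0.9^{1600/50} = 0.9^{32} \approx 0.034

roughly 2% counting from construction, or about 3% for the remaining 1,600 years, hence the 2-3% range.)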
Just an aside, the Colosseum, long held up as an example of superior Roman concrete construction, is mostly brick as well...which I found infinitely surprising when I finally had a chance to visit it. [1]
Here's a walk around the structure [2], it's virtually all brick.
The very first sentence of this article is a giant red flag, the comparison between the Pantheon in Rome and modern concrete is deeply misleading -- Roman concrete was a different material than modern Portland-cement/sand/gravel scheme, it had better resistance to cracking and could set under water. [1]
The rest of the article seems to be, if you think on different time-scales and use different cost-benefit criteria (e.g. include or exclude environmental effects), you get different answers about the suitability of various materials. This is indisputably true.
Modern concrete can set underwater, and, at least for concrete where you care about the results, you want to keep the surface wet until the concrete cures appropriately.
(IANAPE, but I took a grad level concrete course in school)
addition/edit: Quick primer. When concrete cures, it's a chemical process that converts free water into an electrostatic gel in the cement crystals. That interlocks the small and large aggregate to make a solid. If concrete dries rather than cures, then that gel doesn't form and you don't get the gel holding it together. If you heat up cured concrete enough, you'll drive out the water and make it a powder again.
If you cure something under water, it can technically continue to cure for a very long time. Normally you keep it moist for 24-48 hours, and standard testing is done at 28 days. That will get you something like 90% of the final strength, if it can continue to cure indefinitely. I've tested concrete that was semi-submerged for 40 years where the design strength was 4ksi, and it tested out at 14ksi.
I see a lot of engineers calling the article FUD and BS, what with it mentioning ancient Roman buildings and mud bricks.
I also see several commenters (myself included) wholeheartedly agreeing with the point it makes.
I think bluthru hit the nail on the head somewhere below (which will soon be above?)
> The point is that longer-lasting structures should be cheaper, but because we don't factor in environmental harm and lifecycle cost into the price of things we end up with cheap buildings that exist to generate ROI ASAP.
Factoring in those kinds of costs runs contrary to mainstream economic doctrine, so the question really is whether you think that capitalism (in its current form) is doing more harm than good for our communities and/or for our species as a whole, especially including future generations.
An alternative to steel for concrete reinforcement is basalt-fiber rebar [0], which piqued my interest via its use in a project for free-floating (geopolymer) concrete seasteading vessels [1].
That's really interesting. There are a lot of advantages to this kind of rebar apparently.
The website also says:
> Basalt rebar does not conduct electricity
Which got me thinking about electrical codes. In some cases, a concrete building's electrical wiring system uses the rebar in the concrete structure as an electrical ground.
But, I guess for buildings constructed with basalt rebar, conductive copper rods would have to be specifically built in separately, for purposes of electrical grounding.
I am afraid I don't know specifics. For a while the only suppliers were non-American (primarily Russian), so the market prices were not equitably comparable. /r/floathouse (link [1]) will have up to date information.
They claim it's better and cheaper than steel rebar. No word about disadvantages, which I assume exist (edit: found, see below).
Edit. Oh, they carry a price: http://nano-sk.ru/price-list/ -- 1m of d=14mm rebar costs ~$0.60. But I don't know the price for the regular steel rebar, though :)
The depressing counterpoint, I suppose, is that 99% of the time, you're not building the Pantheon, you're building something that was intended to be minimally-acceptable, utilitarian, and disposable from the get-go: a parking garage, a tilt-up big-box building, a freeway on-ramp, a strip mall. All of which may well be obsolete in a few decades just because the urban environment changes rapidly.
If you could figure out a systematic way to cut the cost and the lifetime of such lowbrow, mass-produced concrete structures in half, developers wouldn't hesitate, they'd jump on it immediately.
Most of the big-box buildings I've seen in UK recently are steel frame/breeze block/metal cladding structures on a poured concrete base. The idea being you can basically unbolt the walls when you want to take it down.
I don't see expendability as necessarily a downside - if the system supported the lifetime learning of experts. Building and rebuilding the same thing again and again should be a good way to learn how to make it as good as it can be from a design and construction point of view. The problem is when the system does not cherish and support this opportunity for learning and mastery - which many for-profit commercial establishments sadly don't.
This article is full of FUD. What is the alternative? All these reinforced concrete problems are well understood and studied. If proper construction and design codes and maintenance guidelines are followed, these structures can last a very long time.
In Europe we have the Eurocodes, which account for these problems, and to my knowledge concrete cancer is not related to steel corrosion but to a long-term chemical reaction between some aggregates and the cement. Remember that concrete is cement with sand and stones, and the hardening chemical reactions are complex and can last for decades.
The only salient point this article mentions (but only briefly) is that concrete production generates a huge amount of CO2. Everything else is hogwash. In terms of strength, versatility, and cost, steel-reinforced concrete has proven to be the greatest building material humans have ever devised. It is by no means perfect, but nothing is. Concrete needs to be maintained just like anything else. With neglect it decays.
I'm a fan of brutalism, so I do like concrete buildings and find them interesting, but there's also a good case to be made for mud bricks, which the author alludes to. Yes, you'll have issues building 10-storey buildings out of mud bricks, but it's definitely possible to build structures (like barns) out of them that can easily last more than 100 years, without the builder needing a civil engineering degree. I've seen one such structure with my own eyes, built by my great-grandfather, who was a peasant. The only thing you have to be careful about when building with earth is water infiltration; otherwise you're good.
Buildings made out of earth are also better heat insulators than concrete. My grandma's house was always cool in the summer, while my apartment, which is part of a concrete building, wouldn't be livable in the summer without AC.
That's an apples to oranges comparison, though. Mud bricks are a good building material, and so are a lot of other things (e.g. wood, bamboo, paper), when the structure only has to support itself and a roof. But concrete enables structures that cannot (or should not) be built with those materials.
Mud/clay walls are OK for supporting a roof or acting as insulation, but they are weak when you need to attach something to the wall. Moreover, outer layers can delaminate from the wall if they are stronger than the wall itself, so you need to finish the wall with lime stucco, which is soft enough.
To solve these problems with mud/clay walls, wood and straw are used. To date, the G.R.E.B. [1] technique is the most advanced, cheap, and easy to use.
Ongoing maintenance is an important energy and money sink for any society. I find it doubtful that concrete (at least, the modern versions and usage of it) will be sustainable beyond the fossil fuel era, simply because it will be too expensive to maintain.
EROEI for a given bit of infrastructure (ie how much energy is conserved or produced for a given energy expenditure in construction/maintenance) isn't as sexy, but it's just as much a thermodynamic constraint on sustainability as overusing the CO2 capacity of the oceans and atmosphere.
0) Build using appropriate materials/design/construction. It's mostly a solved problem, it just takes wanting to spend enough money to do the process right.
1) Use epoxy coated rebar. It's commonly used in places where there's a large risk of cracking and rebar corrosion (e.g. roadways)
2) Use prestressing/post-tensioning to put an overall compressive force on the concrete and prevent cracking (rough sketch below).
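A hand-wavy version of why 2) works (a standard beam-theory sketch with compression taken as negative, not a code formula):

    \sigma_{bottom} = -\frac{P}{A} - \frac{P\,e\,y_b}{I} + \frac{M\,y_b}{I} \le f_t

The prestress force P, applied at eccentricity e below the centroid, pre-compresses the fiber that the service moment M wants to put in tension; keep the combination below the small allowable tension f_t and the section never cracks, so water never gets a path to the steel.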
Small amounts of damage due to rust can be patched up with new concrete. Whether this involves welding in new elements of steel is a question I don't know the answer to.
The real question is do we want buildings to survive 200 years? In 200 years building technology might have advanced far enough that everything is carbon fiber concrete or something far superior. People look at degradation of a structure as some kind of serious issue, but it can also be seen as a positive.
To expand on this, will our descendants have the same demands in 200 years? Will a revolution in transportation dramatically change infrastructure needs? What will employment look like, how will businesses operate? Will citizens want the structures we build in the locations we build them?
The author suggests we build structures to stand the test of time and addresses structural, economic, and environmental needs, but not societal ones. The Moai referenced aren't artifacts because they weren't engineered well enough; they're artifacts because they outlived the societies that built them.
Surprised they didn't mention carbonation and climate change. tl;dr: the rising partial pressure of CO2 in the air drives a spontaneous "reverse calcination" process, lowering the pH, which leads to rebar failure. By 2050 most reinforced concrete buildings will be affected.
I haven't been a practicing engineer since 2007, but if my memory serves me well, one often-used method to avoid corrosion problems was to increase the cover [0] (the distance from the concrete surface to the top of the steel reinforcement, set using spacers). I can't see it mentioned anywhere in the article; I'll probably have to reread it...
As both the partial pressure of CO2 in the air and the concrete temperature rise due to climate change, the depth of carbonation (the process that lowers pH and initiates rebar corrosion) is expected to increase by 45% by 2100 under the A1FI "business as usual" emission scenario. So expect to hear more about this problem in the future.
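The usual back-of-envelope model for this (a simplification, not taken from the study being referenced):

    x(t) \approx K\sqrt{t}, \qquad K \propto \sqrt{c_{CO_2}}, \qquad t_{initiation} \approx \left(\frac{c_{cover}}{K}\right)^2

Carbonation depth x grows with the square root of time, the rate constant K rises with ambient CO2 concentration (and with temperature and the concrete's permeability), and corrosion initiates once x reaches the rebar cover depth. It's also why the extra cover mentioned upthread buys time quadratically.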
At least in California, we don't use it because brick buildings crumble in earthquakes. http://www.latimes.com/local/lanow/la-me-ln-quake-safety-tho... In new construction, you can use it as a facade, but not for anything structural (to oversimplify a bit.)
Elsewhere in the US, brick would probably be a good choice, but it's more expensive than slapping together another bargain-basement wood-frame house. On the other hand, for some reason, Europe has a lot of brick buildings and even still puts in brick streets sometimes.
The high labor costs have also made brick masonry a basically lost art. There was a building a few blocks from where I used to live where a car crash had led to a catastrophic collapse of a circular structural brick tower. It was an eyesore for years and years; the owners apparently couldn't find the skills to repair it for a long time. The repair looks great but doesn't perfectly match.
Even with the hassle, structural brick buildings are great.
Where I live there's a huge number of apartment blocks, most of them built in 60s and onwards, and the apartments are the sole place to live and the prime asset for most people.
They'll definitely start crumbling in a few decades, and not many people will be able to afford a new home on top of the depreciation of their main asset. I have no idea how this might ever resolve, frankly.