CAD Is a Lie: Generative Design to the Rescue (lineshapespace.com)
58 points by MichaelAO on Jan 14, 2016 | 62 comments



Note: This is an Autodesk blog marketing an Autodesk product.

The funny thing is it leaves parametric modeling completely out of the discussion. That is, modeling based on pre-designed components that the end user then configures and lays out per the design intent.

This amplifies the productivity of a person doing the modeling quite a bit.

Granted, this works only in areas with fairly well understood problem and solution spaces and is a bit different than the approach described in the article.
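The parametric idea above can be sketched in a few lines: a pre-designed component exposes parameters, and the user configures instances rather than drawing geometry. The `Flange` class and its parameters here are invented for illustration, not from any real CAD system's API.

```python
import math
from dataclasses import dataclass

@dataclass
class Flange:
    """A pre-designed component: the user configures parameters, not geometry."""
    bolt_count: int
    bolt_circle_dia: float  # mm
    outer_dia: float        # mm

    def bolt_positions(self):
        # Lay the bolt holes out evenly around the bolt circle.
        r = self.bolt_circle_dia / 2
        return [(r * math.cos(2 * math.pi * i / self.bolt_count),
                 r * math.sin(2 * math.pi * i / self.bolt_count))
                for i in range(self.bolt_count)]

# The design intent lives in the parameters; change bolt_count
# and the hole layout updates itself.
f = Flange(bolt_count=6, bolt_circle_dia=100.0, outer_dia=130.0)
print(len(f.bolt_positions()))  # 6
```

The productivity win is exactly this: the layout logic is written once, and every configured instance inherits it.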

Just to point out it's not a review article in general, but a marketing piece.


Yeah, but the vision of the article is bob on. This is how things are going to be (source: usually being right)


Of course it's right. But will it be right in 2 years or 25?


>> That white thing is a Frisbee, but it’s only a little line. How the heck did it know?

There's a joke, it goes like this: A physicist, an engineer and a guru were asked what, in their opinion, is the greatest miracle in nature.

The physicist said it was quantum entanglement.

The engineer said he most admired the way human hands are articulated.

The guru said "the thermos".

Why the thermos, the guru was asked, of all things in nature?

"It keeps warm things warm and cold things cold", the guru said. "That little bottle! How does it _know_?"


Well yes and no.

This tech is SUPER cool. It can find strength / weight combinations that are very difficult to arrive at via the standard analytic means. That's worth a lot in some contexts.

However, these structures do not map well to existing manufacturing. They do map extremely well to additive manufacturing, of which ordinary 3D printing is a core component.
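A toy illustration of that kind of strength/weight search, under simplifying assumptions: size a rectangular cantilever beam for minimum cross-section area (a stand-in for weight) subject to a bending-stress limit, by brute force. The load, limits, and dimension ranges are invented numbers; real generative design explores far richer geometry than two parameters.

```python
def bending_stress(load_n, length_mm, b_mm, h_mm):
    # Max bending stress for a tip-loaded cantilever: sigma = M*c/I,
    # with M = F*L, c = h/2, I = b*h^3/12  ->  sigma = 6*F*L / (b*h^2)
    return 6 * load_n * length_mm / (b_mm * h_mm ** 2)

def lightest_section(load_n=500.0, length_mm=300.0, limit_mpa=200.0):
    """Brute-force the feasible region and keep the lightest section."""
    best = None
    for b in range(5, 51):       # width candidates, mm
        for h in range(5, 101):  # height candidates, mm
            if bending_stress(load_n, length_mm, b, h) <= limit_mpa:
                area = b * h     # proxy for weight
                if best is None or area < best[0]:
                    best = (area, b, h)
    return best

area, b, h = lightest_section()
print(f"best section: {b} x {h} mm, area {area} mm^2")
```

Even this crude search lands on the tall, thin section an engineer would expect; the interesting cases are the ones where the optimum is *not* something a person would guess analytically.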

CAD continues to segment:

The very high end of CAD is serving the major vertical markets, and typically what happens there is tech like this will get licensed, reverse engineered, duplicated, whatever, and integrated into a greater whole ecosystem that goes well beyond geometry creation.

Middle, or ordinary CAD, is being used for all sorts of stuff, and people don't mind a bit longer workflow, or missing out on some features. They want an effective tool at a great price, and their access to manufacturing tends to be as mainstream as the CAD is.

On the low end, CAD is being used for maker / hobby tasks, and a whole lot of experimental type work. This tech is something to come out of that, and it's currently being supplied to the mid or mainstream CAD market by a vendor looking to move up and own more of the stack.

Whether that's successful or not remains to be seen. There is a lot of legacy data, operations, systems that just can't be ripped out. Users of those systems may well adopt this feature in whatever system it's present in, and simply move the resulting geometry to the larger system, perhaps automate doing that too, and move on.


The article briefly touches on the manufacturing issue: "lets designers describe the forces that act on an object and then lets computers go off and make it. These forces can be structural loads or even manufacturing methods."


It does, and note the scale of things: this is early, with really great advances to come. New manufacturing methods will help too. Additive is very well matched.

I'm watching this closely. Additive, weaving, composites, nano scale, all are converging into a space that looks a lot more organic than what we do for the most part today.


Agreed. But since one of the goals that can be provided to a system is ease of manufacturing, I expect we'll see a lot of blobjects that are optimized for cheapness as much as elegance, strength, etc. - likely more, in fact.


AIs may routinely design things in the future, but it'll be a while. Before that happens, people will be doing CAD in VR. VR is going to be a much bigger force for change in the CAD industry than AI for the next few years.

I think anyone who has used an HTC Vive or Oculus Touch will agree that VR with tracked controllers is an order of magnitude improvement over monitor+mouse+keyboard for both viewing and creating three dimensional objects.


I’m quite curious myself. Although, having worked for quite some time at a place where we have 3D video walls with head tracking, you can still call me a sceptic.

We only use them for presentations. The reason, imo, is that mechanical CAD modeling is really not just about creating 3D representations. An engineer works with math, tables (Excel is big in engineering), enterprise resource management systems (you look up available material, prices, etc.), PIM software and of course the browser. You switch between them a lot. As far as I can tell, the resolution of both the Oculus and the HTC Vive isn’t suited for editing larger texts or tables efficiently.

Time (or rather, the VR industry) will solve this issue, but even then I’d argue many engineers prefer hand drawn sketches to support their thinking or for communicating ideas with their colleagues.

While I’m sceptical about the use of VR for everyday engineering, I believe it will be fantastic for letting non-engineers explore designs. Like a production worker analyzing a design for assembly friendliness, where he can take apart the design without learning the right mouse-keyboard interactions or getting used to 3D motion controllers. Which, from my experience, is a huge barrier to them.


I'm curious how VR is going to be used for modelling in the near future (around the Oculus release date). Are there programs already in development for the big modelling packages, or will they be standalone programs?


I'm interested in the future of CAD. Here's a link to implicitCAD[0], a constraint-solving CAD written in Haskell. There's interesting discussion; apparently what the author was trying to do was proven by academic research to be Very Hard. Research he hadn't yet read!

I've never done CAD, so I have nothing to contribute. I enjoyed the article!

[0] https://news.ycombinator.com/item?id=9248174


Constraint solvers and geometry kernels are big, very expensive, beasts of software. They have grown over the decades, bit by bit, research yielding new approaches, as well as empirical, "Hey, I couldn't make this." use cases contributing to a very smart body of code with millions of man hours in it.

Setting this tech aside, I would still classify CAD as an unsolved problem space. Even advanced kernels can and do fail, or generate erroneous geometry on what one would think is a common, or simple use case.

That said, we are getting really good at basic geometry now. Failures on those basic cases are much fewer and farther between, with most users just seeing robust operation now. It's taken many years for this to happen, counting from the dawn of affordable solid modeling CAD in the mid 90's.

Now, it's about getting computers to do what people can do, and that's infer intent as we do with our mind's eye and analytic skill. It's also continuing to be about feature create, and that's all about how to boil down the UX of shape and intent description to the system. Also quite challenging.

This kind of thing, as a branch in the "what's possible with CAD?" tree is pretty damn exciting! There are going to be a whole class of great use cases for this in the near future.

Bear in mind, old feature create means and methods almost never die. They get sidelined, used less, whatever, but they continue to exist because there continue to be use cases they are excellent for.

A lot of people think, "replace" when the real questions should be, "Add to, and what new things can we do?"


Big systems growing over decades are usually easy targets for drastic simplifications by building upon better abstractions.

There's a great talk by Alan Kay titled 'Is it really "Complex"? Or did we just make it "Complicated"?' [1].

He shows, for example, the Nile system, where they replaced several tens of thousands of lines of vector-graphics rendering code with 450 lines. An example from a different context is the miniKanren / microKanren approach to logic programming. It's basically Prolog fitting on a single page [2].
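The tiny-core point is easy to demonstrate: the heart of a miniKanren/Prolog-style system is just unification over substitutions. This is a minimal, illustrative sketch (variables are strings starting with `?`), not a full kanren implementation.

```python
def walk(term, subst):
    # Follow variable bindings until we reach a non-variable or an unbound variable.
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(('point', '?x', 2), ('point', 1, '?y'), {}))  # {'?x': 1, '?y': 2}
```

That is essentially the whole kernel; search and goal combinators layer on top of it in a few dozen more lines.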

If a similar simplification and unification could be accomplished for representations of (solid) 3D objects, that would be a great great result.

[1] https://www.youtube.com/watch?v=ubaX1Smg6pY [2] http://minikanren.org/


Over very long time periods, we may see progress here, but inertia is huge. Take Parasolid. It's a very good kernel. Between it, Granite and the Dassault kernel, we have most of the world's geometry being handled on those kernels. Probably billions of man hours at this point.

Interoperability is basically terrible.

If it were me, I would see efforts to interoperate and to infer intent, features, etc... as the most productive task. Those kernels aren't going anywhere, nor are the geometry use cases getting easier.

A refactoring of all that is a very seriously expensive, time consuming task and still that body of data, intent, automation, etc... is there to deal with.

It's hard to even think about an economically viable case.

There are some trying with primitive, open kernels, and various takes on forks of the established ones.

One vendor is dividing the kernel into well defined pieces so that other software can operate in those gaps and do so efficiently.


CAD will never be a solved problem space because ultimately it's the wrong problem space.

The name of the blog shows why. The CAD problem space is an extension of "lines" as the design medium. It's implicitly premised on humans in the loop, creating human-interpretable representations as a precursor to the artifact. There's an intermediate step that isn't strictly necessary, and the CAD problem lives in that intermediate step.

To put it another way, if I want a widget and I have the GCode and a 3D printer I'm pretty much done. I've bypassed the CAD space and more importantly it doesn't matter whether the GCode was generated via a DWG at some point or a computer program generated it directly without human intervention after searching through a million images from the internet.
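The "generate the GCode directly" idea in miniature: emit the moves for a square outline with no CAD model in between. Dialect details (units, feed rates, tool control) vary by machine, so treat this as a generic, illustrative sketch rather than code for any particular controller.

```python
def square_outline_gcode(size_mm, feed_mm_min=600):
    """Emit G-code for tracing a square of the given size, starting at the origin."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} ; rapid to start")
    for x, y in corners[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_min} ; cutting move")
    return "\n".join(lines)

print(square_outline_gcode(20))
```

Whether a DWG ever existed upstream of this function is invisible to the printer, which is exactly the commenter's point.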

That kind of approach, however, is a problem for companies like Autodesk, because without humans in the loop their product lines are obsolescent and Wall Street is unhappy with management. The CAD industry depends on continuous rather than quantum change, because the obvious quantum change eliminates seats and with it per seat revenue models.


Sure, a simple, atomic thing, can make sense.

That all is a very far cry from the more sophisticated design and manufacturing efforts being done today. Not only is CAD a core part of that, it's not possible to do the work without it. So many interconnected tasks...

The 3D printer G code case is a good one for replacement, or concepts, or even simple things. But, we never really needed CAD for those anyway. I used to just design in gcode, for example.

Computer generated models are a very long way from practical realities today. That branch of tech is seeing increasing use in movies, games, and civil engineering. Handling more complex, or precision geometry is crude and largely unusable in so many contexts today.

I think it is really important to realize manufacturing tech never goes away. This will include CAD in its various forms too. Once we know how to apply a technology, it continues as each tech so far has sweet spot maximums that pay off at scale, or in niches.

Parametric CAD, coupled with an API and other code to specify shapes, is being done and done well. Displacing that with something else would require we solve a class of engineering problems that we are nowhere near solving. Professional engineers will continue to be relevant for a very seriously long time.

CAD is about a lot more than just expressing and reproducing geometry.


You're talking about a very simple widget. Autodesk is used by a huge number of companies for simple 2D drawings and complex systems.

You can't give your client a back of the napkin, hand sketched drawing in every case. You won't get your GCode without being able to see a drawing of what you're trying to manufacture. CAD beats out hand drawn sketches in almost every situation when it comes to manufacturing.


When a computer can generate instructions to automatically fabricate the artifact, there is no client. There's simply a user. Autodesk's business is selling to people whose business requires the expert-client relationship.

That step is as necessary as an 80 column punch card entry team.


I don't see that day coming in my lifetime. There are simply too many design variables. You can't engineer a totally new product with AI. It's one thing to make improvements to an existing design.


Most products aren't totally new. Some things that are currently designed, such as buildings, are relatively simple and largely built the same way they were decades and centuries ago. Same is true for furniture, hand tools, cookware, cutlery, etc.

The thing is, the idea that there are many design variables is contrary to recent AI development. Instead of creating a bunch of rules and putting them in a specification, the machine just looks at what does and doesn't work and decides what might work.


BTW, existing, rule based CAD nails those cases.

Hell, I was showing people how to make parametric buildings in the mid 90s...

Those standard cases have been ripe for the picking, and it's been done for a long time now, usually with a combination of CAD and rule systems, and various ways for humans to input various things.

There will continue to be nice gains and a simplification of CAD, as well as the factoring out of routine work on those cases. We have never needed CAD to do that stuff. CAD just helped do it better, more efficiently.

For the growing body of stuff we do need CAD to do, these approaches are nowhere near the maturity needed. Novel things do require people, and they must iterate, collaborate, analyse, etc... CAD is the best we've got.

When we get a real AI, that has real analytic and creative skill, maybe. Will be exciting too. Long way off though.


It's a question of who gets to "input various things". CAD systems whether or not they incorporate heuristics or parametrics are based on:

   human owner -> 
   human designer -> 
   production [possibly human] -> 
   artifact
However, the important high level abstraction is:

   human owner -> artifact
where the "->" simply accounts for the time gap that we call "production". Getting closer to the high level abstraction is why people pay for design. For many projects having humans in the "->" is seen as a cost not a benefit...they're forced to pay a designer for time spent rounding corners and gradienting drop shadows because those things are more fun and easier than iterating.

It's also more fun than writing up vast arrays of heuristics and parametric constraints.

Computers are really really good at iterating. With machine learning, they don't require long lists of heuristics and parametric constraints. Just a lot of examples and an unimaginably large number in human terms of CPU cycles.
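"Computers are really good at iterating" can be shown in miniature: a random-mutation search improves a candidate purely by scoring it, with no hand-written rules. The scoring function here is an invented stand-in; real systems score candidates against examples or simulations.

```python
import random

def score(x):
    # Stand-in evaluator: closer to 3.7 is better. In a real system this would
    # be "does the candidate work?", judged from examples or simulation.
    return -abs(x - 3.7)

def iterate(start=0.0, steps=2000, seed=42):
    """Hill-climb by random mutation: keep a change only if it scores better."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        cand = best + rng.gauss(0, 0.5)
        if score(cand) > score(best):
            best = cand
    return best

print(iterate())  # converges near 3.7
```

No list of heuristics appears anywhere; the only domain knowledge lives in the scoring function, which is the commenter's point about examples replacing rules.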

That's not a place Autodesk as a publicly traded company can pivot. So in the 1990's, when you were developing computational approaches, they pivoted toward a product line architecture that allows Autodesk to roll out features at regular intervals. If it had wanted to go computational, it would have doubled down on AutoLisp with macros and promoted developers. There's a business reason that full blown AutoCad developer tools require approval and that there's no AutoCad app store.


We are a very long way from this.

You are correct in that rule based design works, but we don't really have AI yet. We have AI like systems, but they currently do not perform at the level needed to be a serious replacement.

Someday, maybe. It's going to be an awful long time.

For now, say the next 20 years, it's going to be about making CAD accessible to more people and integrating it with other systems. Right now, today, when that is done, the payoffs are huge, but so are the investments.


I've seen some real compelling stuff start to come out of CFD companies. Ie, taking a dozen vehicular dimension variables and solving for the best aerodynamic performance.


I'm advocating precedent based design, not rule based. The Acropolis masons didn't have rules, they had examples.

BTW, I have found it really hard to get out of the habit of thinking about the production of artifacts in terms of contemporary practice...to think of design as a place where middlemen lurk. But in reality, a homeowner's first choice is, to a first approximation, building sans plans or permits.


First, there are things like buildings and infrastructure, and there are mechanical parts, devices, etc...

By example could work for some things today. Often, template or start parts are the basis for those. And that's been going on, like you say, for some time. We don't actually need CAD to do that, though we do have a high communications burden without it. We could invest in those comms, dumbing down a lot of stuff too.

But, there is no way in hell that sweet spot can replace design and engineering. The liabilities alone keep it off the table, and the problem space is absolutely huge too.

As another person here said, "not in my lifetime."

You know, I worked with some people who build yachts. Lots of old world skill in that building. They can, if desired, just build a nice boat with nothing more than the tools and materials in the shop.

They bought CAD from me that day, and the owner expressed what you are here too.

Turns out, good design and engineering makes better boats in both the subjective sense and objective performance, physics sense. And communication allows for faster, less error prone boats too.

You are describing a distant, potential future. We have nowhere near the level of tech, nor the development as people, needed to actualize that.

CAD is part of that journey. With it, advanced products, structures, etc... are all possible, but the work involved is still insane.

I've seen and been a part of the development of systems that do this, and even on trivial parts there is a whole lot to be accounted for.

The bars we have in place, that differentiate engineering from design, permits to build, geological studies and other various impact studies all regulate the building of things that people depend on and that they must trust.

We have those in place for really good reasons, and one of those is the fact that we just don't have mastery as a species. Not yet.

All I hear is, "dang, it costs money to do it right", and "why can't I just copy my neighbor?"

The answer lies in the complexity of the world and the dynamics of the people here in it.

Nice vision, but good luck with that.

For now, I suspect the more productive answer is to continue making CAD smarter and more accessible to ordinary people. We don't yet live in a Lego brick world, and I suspect human creativity will drive that being the case for the foreseeable future.


There's a great deal of cultural and economic history attached to yachts.

So let's look at web pages. An end user with no technical training in networking, graphics engines, operating systems, etc. can generate sophisticated artifacts with minimal input from a computer. Sure, it's not a "yacht", but it's functional.

I've been in AEC for almost 30 years. I've built, designed, and regulated buildings through three recessions. What has kept software from eating design is the economics of piece work. There's no way to scale except linearly and the cyclic nature of the industry puts a natural cap on that.

Typist used to be a career.


I've had that same dialog and result across a wide variety of disciplines. The yacht people came to mind from the comments here.

I'm not really going to equate Web pages in the way you are attempting. The parallels are far too coarse and fail to be inclusive enough to make for meaningful discussion.

The idea of, it's worked before, so do it again is a good idea. But there are a lot of limits to doing that safely that are just not being considered here.

In a vacuum, "I need this thing", or even, "this building" doing that can make sense. I know a guy who has applied roughly these kinds of ideas to things like parking garages. My experience is more electromechanical, but there are strong parallels.

I will agree we may be closer on the AEC side of things. Design can often be less of a factor, but there still is all the analysis and verification on significant structures. For minor league things, are we not mostly there? A lot can be done without CAD and that's fine.

I will argue effective CAD can still improve on many of those cases. Making CAD accessible still seems to me the better path, and that's due to the fact that we really can't responsibly ignore the process.

CAD is a whole lot more than making plans, or defining shapes. There is descriptive and analytic geometry, two cases made difficult to employ on some CAD systems that were simplified to maximize more common use cases. Engineers and designers, who understand how and why one would use geometric methods, pay easily for capable CAD software that delivers great returns to all involved.

I also helped with a FAB, where everyone involved wanted to improve on the reams of paper drawings needed to communicate the work. There also needed to be a means to communicate variances from the plan too. Stuff happens, and is it cheaper to go with it, or rip it out and do over?

I modeled the whole FAB, and output the result as a 3D view, measure, section, etc... dataset. The guys could go on site, locate themselves and then see the work, to scale, measure, etc... they could also send back differences and errors in 2D and get fresh models to work from.

And we took a TON of cost out of that one. Pre construction bids were much closer to realities, there was far less error, far less overall comms between everyone, etc... I've been on a few other projects that are similar and can't say much. There are a few people in the world out there who are applying a lot more of the mechanical CAD capabilities with great results.

Because that was done in CAD, I have a ton of options. It's a great template for another FAB. I could add rules and constraints and limits to enhance that too. I have, as an exercise and now know I could make an 80 percent fab start model... the dataset is proprietary, so no sharing, but recent advances in CAD API options as well as how and what parametrics and rules can be input bring a lot to the table.

Yeah, making pretty stuff is a distraction. Management and culture problem. We have an education problem too. The things I mention here are mature, work well, but just not taught to the degree they should be. There is huge industry inertia holding things back in the AEC space that does not exist to the same degree in, say, mechanical and entertainment. Entertainment and games have few human impacts, and we do see computational means being applied. Some of that has trickled into mechanical too. More will come.

And, it's not so practical to express these things in a purely computational way. Honestly, the barrier to entry on that is even higher! If you dislike vendor pricing on CAD, the sticker shock on those kinds of datasets will be brutal. Open ones won't be trusted, they will need to be evaluated, and closed ones will work, be supported, liability compliant, etc...

Even the basic input to a computational type system will need to be a sort of CAD, due to the difficulty we have in communicating geometry sans visual means. Should that system result in something viable, nobody would ethically bill with it until after a professional does an analysis.

You mentioned scaling... well, applying high end CAD to stuff like this, and doing it the way the mechanical people have done it for years, can scale! It's the CAD that can bring the scale to the table. It's been going on for years, and we are at the point of digitally simulating entire vehicles, factories, etc... and that stuff brings us better, faster, less expensive things and the automation needed to make them too.

2D rat's-nest drawings are going away. Delivering robust 4D and 3D to people is pretty awesome, and a smaller planning, design, compliance, engineering group can do the work of many people, and across multiple projects too.

Honestly, I think a lot of what you are getting at exists, and CAD enables it. Human input can be distilled down to a Web page. Design rules, template parts, intent and other compute driven inferences take people out of the equation now too.

In the mechanical case, the ongoing changes continue to add a lot of value. It's not just a business model. If anything, the AEC side of things could learn a few things from the mechanical people, who by the way, do take liberally from appropriate AEC ways and means where they make sense.

Both teams could benefit from software ways and means, and I know of at least one new, free to use on your phone or computer cloud type CAD tool doing exactly that.

There is a really ugly gap between small CAD, low complexity, and major efforts involving a lot of integrated systems.

Design reuse in this space holds a lot of value potential. It absolutely will not displace high end, real CAD (Siemens, Dassault), but could very well dominate a lot of the market, should it end up out there, possible for mortals to use, etc...

On a vendor specific note, Autodesk finally sees this and is building some nice stuff. They may be like AMD, always a bit behind for failing to build when others did, but maybe not. We shall see over the next 10 years.

The position of drafter or CAD jockey is on the decline. CAD will eventually settle into something most people can and want to do.

Maybe reaching that point will also mean enough data and expertise being captured to take steps in the direction you find appealing. It's gonna be a bit of a wait...


The AEC side is limited by economics. An architect's business is one project at a time. Landing and completing a project only has indirect long term value as marketing.

Everything is a one off because every piece of land is unique if for no other reason.

Or more realistically, when the client doesn't pay but says, "but you've still got the design" it misses the fact that AEC designs have no value for some other site. It isn't that AEC is full of idiots that can learn from the world of product engineering. The circumstances are radically different because real-property is different from chattels and goods.

Anyway, relative to a multi-core CPU, buildings are dumb simple. And people in the loop makes the hard part of projects, managing and coordinating humans, worse. In particular because piece work means that success equates to a backlog. So each person has one and backlogs are not conducive to quick response to changes or commitment to deadlines.


Indeed.

However, it's my observation that a very significant economic gain is there to be had with a better, up front investment. Not just drawings and models. Intent.

It's entirely possible to package up a ton of stuff that can be used to assemble a project, and when that's done, those packages can drive all the info needed by the contractors each day, if desired. This technique was applied to a major airport we probably both know, and it was done on mechanical CAD for those reasons. The bid was more accurate, and the project ran much closer to on time and on materials.

Buildings are sort of dumb and sort of simple too. Of course, in the one time, forward create kind of way, sure. It's just a pile of simple geometry. On the other hand, if one actually wants to move up a level, add intent, build in design rule checks, etc... suddenly, you've got a class of buildings possible, those buildings largely derived from set pieces, with the real design effort being more centered on the unique aspects of a given site.
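The "design rule checks" idea can be sketched in a few lines: instead of re-verifying each building by hand, requirements become checks that run over a derived model. The rule names and threshold values here are invented for illustration, not taken from any real building code.

```python
# Each rule is a (description, predicate) pair over a simple model dict.
# Values and thresholds below are illustrative only.
RULES = [
    ("corridor width >= 1200 mm", lambda m: m["corridor_width_mm"] >= 1200),
    ("exit count >= 2",           lambda m: m["exit_count"] >= 2),
    ("door clear >= 850 mm",      lambda m: m["door_clear_mm"] >= 850),
]

def check(model):
    """Return the descriptions of every rule the model violates."""
    return [name for name, rule in RULES if not rule(model)]

model = {"corridor_width_mm": 1100, "exit_count": 2, "door_clear_mm": 900}
print(check(model))  # ['corridor width >= 1200 mm']
```

Once checks like these ride along with the set pieces, every derived design gets verified automatically, which is the "class of buildings" point above.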

Couple this with a bid process that is driven off that same intent, and doing prospective designs starts to change. Not only do they get a realistic idea of the design outcome, but they get bids, materials, etc... that are closer to real, and derived from a team with a history of doing that.

I never classified anyone as idiots. I see it more like being too conservative. Happens in mechanical spaces too.

It's just process pieces that are laying around waiting for people to make better use of them. Many are fat 'n happy, so they don't. Others, particularly in the strategic project space, really are looking for better ways to get things done, and done on time and on budget.

Real property has its own set of requirements to deal with, but in the end, they are just requirements. Employing more of what the tools can do really can have an impact. I've been a part of a couple of those, and the results were pretty impressive. Coordinating those humans, once you've got a dataset that is live, real time, able to iterate based on variances that just happen with property, doesn't look a whole lot different from that same process in the product space.

Manufacturing, and the product space in general, struggles with this same thing, and there have been significant improvements in overall communications in the last 10 years that are just now really starting to see wider use. All electronic shops are here now. 10 years ago, there were a few, and some concept places, but most everybody was slogging through the piles of paper and high latency, high labor means and methods. Ugly stuff.

Today, it's possible for a CAD dataset to drive all of it. And parts of that are still ugly, but improving. AEC can do this too. Again, it's being done by a few progressive groups out there, and that's direct experience. I happened to be one of the Mechanical CAD guys willing to work with them and translate techniques, and understand their scenario, language, requirements, etc...

Hey, this also ties in with boats...

So, when a yacht gets made, it's custom. They do ungodly hacks during actual construction. The less up front planning and visualization work they do, the more hacks there are. "Just make it fit, and send the result back to engineering..." Fine. Each boat ships with a document set that basically attempts to detail what happened and why. Tons of books.

One artifact of the CAD process was planning more of those hacks away. Another was packaging up core bits where intent can be well defined. And the docs were derived from all that, then more lightly marked up. All electronic, with a paper back up, just because the sea. The sea is toxic. But, it cost a lot less to do just a little bit more up front.

Forward 5-6 years, I ended up in a scenario where it worked the same way with a building. (I have a dataset of a library in SF that I helped figure out how to work on.) They got their site study docs, and then drove the CAD from there, building in options, expressing intent, etc... On the prospect walk through, they were able to make real time changes and sell that deal. Potent stuff. I heard a lot about "client didn't pay" and being able to work directly with them, real time, can change that equation. And each project, sold or not, does produce a lot of great stuff that can be used on future ones too. One example might be that parking garage I mentioned. Property varies a lot. However, it is possible to build a parking structure with a lot of flexibility and design rule checks that can fit into a lot of scenarios. Worst case, it's an 80 percent basis, for the 20 percent real work of dealing with that particular site.

When site construction began, a computer went to live there and all the coordination, handling of problems, documenting variances, distribution of materials, measure, etc... happened on it. Sometimes drawings got output, maybe for the more device toxic tasks (and that gets tossed when work is done), and a lot of people read 3D data and worked from it directly on a laptop or pad. Walk on site, pull up model, expose the relevant bits, compare, measure, do.

When the building is done? Computer stays there. It's the living record of the thing.

Employing these means also makes it possible to do more than one project at a time, or at the least, more projects, faster, with fewer people.

Right now, cars, airplanes and other big things have those living records too.

What I'm trying to get at here is the clusterfuck of CAD (and it's often just that) has a lot to do with how connected and fluid and accessible the data is to people. When the people can work from the data, and that data is derived from the model and the intent in that model, a whole lot of that clusterfuck goes away.

I think, after having this very interesting chat, the desire to realize a more compute oriented approach has some elements in common with just making the CAD more widely and directly usable by most people, not just CAD people.


True. And there's the fact that non-CAD colleagues don't really know what CAD work involves, so you must be seen doing CAD, otherwise it looks like you're not doing your job; the same goes for the pre- and post-CAD things...


Great progress is being made on this. Basic CAD features are now on phones, and we are able to work fully electronically too. This has been a very long and expensive effort.

Before we really see CAD marginalized in any real way, we will see it in everyday use first, and it will be easier.

It already is. Kids can do it now on phones.


It's not progress in any technical sense. I ran AutoCAD on an 8 MHz 8088 PC with 1 megabyte of RAM. It was more capable than anything that runs on a phone these days. It could be automated with Lisp and command scripts. The interface could be altered with menu files.

That's the CAD market: it's surface features, not deep capability.


I absolutely beg to differ.

And I was there too. I have run CAD, most systems you can name, starting on an 8-bit Apple II. (6502, 1 MHz, 128K RAM)

A quick look at the high-end software and its advanced, integrated, data- and code-driven features shows a lot of new capability. You might not see much of it in AutoCAD, but it's out there.

In terms of being able to get the right information to the right people at the right time, what we've got today is absolutely huge compared to those times. In addition to geometry, it's possible to drive multiple, well integrated datasets from the core models and their intent, and it's possible to do so with code, or human input, or some data capture or other.

It's possible to make CAM ready models on a phone, and do so with a pretty capable geometry engine. An old wireframe type system really doesn't even compare.

One of the bigger improvements comes from the lightweight visualization pioneered in the early '90s. That same phone can render your site, with any combination of data needed, and do so in real time, and it can allow measure, move, markup and a lot of other basic things. There is no need for paper in a ton of use cases.

Back in those days, everybody needed paper. Lots. Of. Paper.

In the compute / design optimization space, integrating analysis software with CAD software has brought the ability to design, then simulate and automatically refine and tune models computationally. This can even be made cookie cutter, so that experienced analysis people can boil things down to help less experienced people out. That, plus design rule checks that actually are based on meaningful data from the solid models, means more and higher level design is more accessible to more people.

Geometrically, buildings are kind of simple most of the time. However, if one wants to actually express intent, and derive them computationally, the same kinds of problems exist as they do for, say the product space, or auto / aero.

When it comes to actually making more complex things and changing them, systems today can infer so much that prior generations of software could not. I have a hard time even describing how important and time saving this is.

And data?

Hell, what kind of data do you have? Real model, paper sketch, photograph, old wireframe, solids, surfaces, tabular data... ? The better systems can use any and all of it, and can do so in a robust, most always parametric, and associative way. (the latter, if desired, and that's not always desired)

Now, having been there, I can use an old system to build a lot of stuff today. Most of my younger peers would not even know where to begin. Good. They get a much improved workflow that is an order of magnitude smarter than anything I ever saw from that time period.

Really, I'm not entirely sure you've seen advanced CAD. Autodesk isn't there yet, but they are working hard right now. Good things happening. It will take them a while, as I've mentioned already. A quick look at what the top two (Siemens and Dassault) can actually do will show tons of smarts present now, not present or even possible then.

For experienced people, there is a whole lot there. Not just the surface either. For inexperienced people, it's growing very accessible now. The number of use cases for "non CAD" people to successfully interact with and operate using data from a CAD model is way up and growing.

That same phone, when not used to create something, could very easily deliver all the data somebody needs to do something, and they themselves can learn how to get anything they might need from that model too. It's just not anywhere near as hard as it once was, and paper finally isn't really needed.


Thanks for your replies; you both seem experienced, and it's a pleasure to read!

CAD tools becoming more powerful is good, but not when it comes at the expense of power users, like (I suppose) brudgers and I are. It's frustrating because other trends aren't being exploited.

What's the sense of being able to do more and better when your scope is more and more restricted? Where are the standard file formats that really carry information? (Small example: Creo and Inventor implemented SVG import, then dropped it...) CAD is a big walled garden and there are no alternatives. When will they realize that openness is pure, marketable innovation?

As a side note, I worked in a company where the saying was "if Inventor doesn't crash every day, you're not working!"... I spent a lot of time debugging colleagues, not the software. And we were using less than 20% of Inventor's capabilities. I think that says a lot about the concrete use of CAD.


So you might be interested in the blog of Matt Keeter [1]. His thesis is worth a read too [2].

[1] http://www.mattkeeter.com/projects/ [2] http://www.mattkeeter.com/research/thesis.pdf


Hi! Most of my recent work has gone into a CAD tool called Antimony, with documentation at http://mattkeeter.com/projects/antimony


Yep! But I forgot about that one because it's not exactly what I want... (BTW, I plan to contact you when I've finished a prototype.)


I find the article a little off...

The generative design it shows has nothing to do with AI. It's just pretty advanced math, maybe with some heuristics, maybe not completely deterministic, but I would have a hard time calling it AI.

Other than that, I enjoyed the read.


The article doesn't mention AI.


This is a quote from the article:

> "This is the computer becoming creative and able to generate ideas that people help to develop. Beyond creativity, the sea change is the computer’s ability to learn."


It then goes on to talk about how it would use machine learning to develop heuristic methods that could perform shallow design evaluations with much less computation.


Looking at the antennae, I'm unconvinced of the claim that the second is better [though I'm not saying it isn't] because it's not clear how suitable it is as a spacecraft component. The older design looks like it would be easier to store for liftoff (in a cylinder); require a simpler deployment strategy (a linear actuator); and be less likely to poke a hole in a spacesuit during extravehicular activity. Not that I'm claiming it was for manned spacecraft. Yet that's the sort of corner case where it makes sense to lift an omnidirectional antenna. A point-and-shoot parabola is the general case.

Parabolas fold and unfold well even when constructed from many parts because a lot of the parts are the same SKU. The older design follows this idea by unfolding with a twist. The new antenna looks like it has more parts that are one-off, and looks like it depends on all of them deploying "just so" (i.e. it looks finicky). Again, the new design may actually be better. But without understanding the mission requirements, signal strength alone does not make the claim reasonable.


Very interesting. It appears to generate a Voronoi mesh to resolve the structural forces. I can't tell from looking at this to what extent the overall shape is predefined by the requirements or generated by aerodynamic effects, though.

The next step would be for it to interpret concept sketches. Anyone who has worked for a design firm will know that most of the people there are interpreting and documenting the concept sketches of a few senior designers. They never write a detailed list of requirements as it's much quicker to sketch what you want to achieve and describe the sketches in conversation with your staff than to try and describe it in writing. Designers are visual people and hate having to write documentation.

Clients write briefs, the brief for a skyscraper will probably stretch to a few thousand pages. But, as programmers will know too, you have to take some licence in interpreting them as they often contain a lot of contradictions.


Dreamcatcher sounds interesting - and in some sense, a return to older workflows. A friend worked at an F1 team at a point when they were transitioning from 2D draughting to parametric 3D CAD - One of the big issues was that in the prior setup, the designers would often draw bearing surfaces & forces on them, and the envelope - leaving the detail to the pattern makers.

Whilst fast, this suffered from the varying skills of the pattern makers, and that they were all approaching or well past retirement age, with no prospect of new blood.

The longer term benefits of the 3D CAD workflow took a while to realise, as it got mired in a lot of implementation detail that would previously have been deferred.


This, but for software UI/UX. Anyone want to put me out of a job?


It's already difficult enough for humans to understand humans, so I wouldn't worry too much about this. What I do expect is that the least sophisticated parts of jobs like conversion optimisation will eventually become largely automated. Humans would still input the constraints, but the machine generates the inputs for a multivariate test, measures results, decides which variant performs best and rolls it out to all users.

Think The Grid [1] mashed up with Multi-armed Bandit Experiments [2]

[1] https://thegrid.io

[2] http://analytics.blogspot.nl/2013/01/multi-armed-bandit-expe...
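For the curious, that loop (serve a variant, measure conversions, shift traffic toward the winner) can be sketched with Thompson sampling over Beta priors. This is just an illustrative toy; the variant names and rates below are made up:

```python
import random


class ThompsonBandit:
    """Pick among UI variants by sampling each variant's plausible
    conversion rate from its Beta posterior and serving the best draw."""

    def __init__(self, variants):
        # Beta(1, 1) uniform prior: [alpha, beta] = [successes+1, failures+1]
        self.stats = {v: [1, 1] for v in variants}

    def choose(self):
        # Thompson sampling: one random draw per variant, serve the max
        return max(self.stats, key=lambda v: random.betavariate(*self.stats[v]))

    def record(self, variant, converted):
        self.stats[variant][0 if converted else 1] += 1


# Simulated rollout: the bandit discovers the better variant on its own
random.seed(42)
bandit = ThompsonBandit(["blue_button", "green_button"])
true_rate = {"blue_button": 0.05, "green_button": 0.12}  # unknown to the bandit
for _ in range(5000):
    v = bandit.choose()
    bandit.record(v, random.random() < true_rate[v])
pulls = {v: a + b - 2 for v, (a, b) in bandit.stats.items()}
```

After a few thousand impressions, most traffic flows to the stronger variant without anyone having to call the test; that's the "decides which variant performs best and rolls it out" step.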


This is exactly what I was thinking about: some way to give the computer a goal and the constraints, and then have it run multivariate tests on generated UIs to optimize for the goal.


We have some programming languages in which the programmer specifies a desired outcome, and the system figures out how to do it:

Prolog and Haskell are two examples.

We'll get there!


Prolog satisfies that description much more closely than Haskell. In fact Haskell is really closer to C than Prolog, if that's the basis for the comparison.


Prolog is probably the only language that gets even close to the original declarative programming promise, of giving the computer a goal and letting it find the way to achieve it.

It looks like machine learning might be moving towards fulfilling that promise, btw.

A bit of a shame that logic programming is not part of that. Prolog and machine learning would be a match made in heaven, if you ask me (you shouldn't, I'm biased).


I think it's worth saying explicitly how Prolog does that. It basically runs a DFS on a graph for you. That's it. You can code up a basic Prolog in ~50 lines of Lisp. It's a pretty simple tool.

It's also fascinating to look how people try to use it as a general-purpose programming language. To do that, you have to explicitly hack for the DFS running in the background, the one you were not supposed to know about (from the POV of designing your program) because it's meant to abstract things away. You literally have to do flow control on a graph search algorithm to do things like iteration.

The point being - Prolog is a useful tool, but it's also a very simple one. It won't do magic for you.
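To make that concrete, here's a toy resolution engine, roughly that "~50 lines" version in Python rather than Lisp: unification plus a depth-first search over clauses, with none of real Prolog's operators, cut, or arithmetic. The predicate names and facts are made up for illustration:

```python
from itertools import count


def is_var(t):
    # Convention: a string starting with an uppercase letter is a variable
    return isinstance(t, str) and t[:1].isupper()


def walk(t, s):
    # Follow variable bindings in substitution s to a representative term
    while is_var(t) and t in s:
        t = s[t]
    return t


def unify(a, b, s):
    # Return an extended substitution making a and b equal, or None on failure
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None


fresh = count()


def rename(t, n):
    # Give each clause use its own fresh copy of its variables
    if is_var(t):
        return f"{t}@{n}"
    if isinstance(t, tuple):
        return tuple(rename(x, n) for x in t)
    return t


def solve(goals, clauses, s):
    # The whole "engine": a depth-first search over matching clauses
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in clauses:
        n = next(fresh)
        s2 = unify(goal, rename(head, n), s)
        if s2 is not None:
            yield from solve([rename(g, n) for g in body] + rest, clauses, s2)


# anc(X, Y) :- parent(X, Y).   anc(X, Y) :- parent(X, Z), anc(Z, Y).
clauses = [
    (("parent", "alice", "bob"), []),
    (("parent", "bob", "carol"), []),
    (("anc", "X", "Y"), [("parent", "X", "Y")]),
    (("anc", "X", "Y"), [("parent", "X", "Z"), ("anc", "Z", "Y")]),
]
answers = [walk("Who", s) for s in solve([("anc", "alice", "Who")], clauses, {})]
# answers is ["bob", "carol"], in the order DFS finds them
```

That clause-order-dependent DFS is exactly the control flow you end up fighting with when you try to use Prolog as a general-purpose language.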


Ah, sure, Prolog is dead simple and not a general-purpose language.

Here's a fun little puzzle then. I wrote this predicate:

==

c(A-B, B-T, A-T).

==

That's all the predicate definition - nothing else to it. One line, yeah?

So, you put it in a Prolog source file, consult it, and then fire queries like this:

--

?-c([a,b|A]-A,[c,d|B]-B,T-B).

A = [c, d|B],

T = [a, b, c, d|B].

?- c([a,b|A]-A,[c,d|B]-B,T-[e,f|C]).

A = [c, d, e, f|C],

B = [e, f|C],

T = [a, b, c, d, e, f|C].

?- c([a,b|B]-B, B-T, [a,b,c,d|T]-T).

B = [c, d|T].

?- c((a,b,A)-A,(c,d,B)-B,T-B).

A = (c, d, B),

T = (a, b, c, d, B).

--

Now, based on your understanding of Prolog from ~50 lines of Lisp- how does that predicate work?


Rule based AI (Prolog) and machine learning are actually considered to be at opposite sides of the AI coin. As far as I know, no one has been able to combine them into a decent combined experience (you either work with rules or trained models).


Define "decent". There's been a lot of work on Inductive Logic Programming, Statistical Relational Learning and generally Logic and Relational Learning (learning Prolog programs from data).

First and foremost is PRISM, that started the trend:

http://rjida.meijo-u.ac.jp/prism/

For newer stuff there's ProbLog, latest version of which is actually written in Python:

https://dtai.cs.kuleuven.be/problog/

And several other programming languages and frameworks:

http://probabilistic-programming.org/wiki/Home

(Though not all of that stuff is specifically logic programming).


It is not as easy as just combining statistics with logic programming. The holy grail would be the ability to seamlessly integrate rule-based reasoning with machine learned models trained on lots of data.


We'll need a good representational form first, a UI/UX language or family of languages, something we can standardize on, develop, and use regularly to first establish representations of key, foundational concepts. Basically, we would need to scrape http://ui-patterns.com/ into a UI object ontology, map a bunch of its unstructured data about why each pattern solves some set of problems, and then we'd need a-whole-nother related problem solving analyzer system that could use that data to produce a software model. OMG recently released an Interaction Flow Modeling Language, http://www.ifml.org/ , and that seems like it has some added range of expressiveness compared to UML and some of their other description languages. I like these OMG guys since they seem to synthesize Semantic Web research and ongoing best practices pretty well.


Do you write your UI to production or just define it in Photoshop?


I "just" define it in sketch.

To be clear, I'm sure you didn't mean to put down the role of design in building a successful software product, but I do like to point out unintended microaggressions when I see them.


My aggressions are neither micro nor ambiguous. I would never say design isn't important; that seems counter-productive, given design is what I (try to) spend most of my time on.

Oh sorry, second to "micro-aggressions".


As a CAD technician I was very aware that the purpose of a drawing was to communicate the requirements to the fabricator i.e. documentation.



