
Is Software Engineering Possible? - danarmak
https://vanemden.wordpress.com/2017/06/07/is-software-engineering-possible/
======
ScottBurson
Edsger Dijkstra famously derided software engineering as "how to program when
you cannot".

But if being able to program means being able to write large programs and get
them perfectly correct, then no one has ever been able to program, the great
Prof. Dr. Dijkstra not excepted. We all have puny brains next to a million
lines of code. That being the case, the study of "how to program when you
cannot" is critical for those of us who actually need to write these large
programs.

I like to say that software engineering is _the art of managing complexity_.
Designing abstractions to be as general as possible, with interfaces as simple
and clean as possible, and minimizing coupling, makes it much easier to think
about how the major components of the system interact. And that's the key: if
it's too hard to think about the system at a high level of abstraction,
because the interfaces have too many special cases in them, more mistakes will
be made.
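A minimal sketch of that idea in TypeScript (all names hypothetical): a small, general interface keeps client code decoupled from any particular implementation, so the interaction can be reasoned about at the interface level.

```typescript
// A deliberately general, minimal interface: callers reason about
// "a store of values by key" without knowing anything about the
// backing implementation, so coupling stays low.
interface Store<V> {
  get(key: string): V | undefined;
  set(key: string, value: V): void;
}

// One concrete implementation; a disk- or network-backed version
// could be swapped in without changing any caller code.
class MemoryStore<V> implements Store<V> {
  private data = new Map<string, V>();
  get(key: string): V | undefined { return this.data.get(key); }
  set(key: string, value: V): void { this.data.set(key, value); }
}

// Client code depends only on the abstraction, not the implementation.
function countWord(store: Store<number>, word: string): void {
  store.set(word, (store.get(word) ?? 0) + 1);
}
```

Swapping `MemoryStore` for another implementation changes no client code, which is exactly the reduction in coupling being described.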

I don't think I can agree with the OP that formal verification is the only
thing that can fairly be called software engineering. I do agree that the more
verification we can figure out how to do, the better. But I would point out
that well-designed abstractions are still important even when you're doing
formal verification; indeed, possibly more so, because without them, the
proofs get to be even more complicated and difficult than they need to be.
It's really the same phenomenon: systems that are hard to think about are also
hard to prove things about.

~~~
barrkel
_Designing abstractions to be as general as possible, with interfaces as
simple and clean as possible, and minimizing coupling, makes it much easier to
think about how the major components of the system interact._

I think this is a false statement. In fact I think you're confusing modularity
with abstraction.

Abstraction hides complexity, and abstractions are leaky; how the whole
behaves can be strongly influenced by behaviours that leak from the
abstractions. It took a couple of goes, but eventually we realized that DCOM
and CORBA were poor ways to design distributed applications.

Abstractions should be as general as necessary, where necessity is subject to
judgement.

Make code too abstract, and people may not understand it because it's not
concrete enough, full of indirections and compositions and factory-factories;
make it too concrete, and people may not understand it because it's too
complex.

Creating an abstraction to share some code may pay off if there are sufficient
clients for the code and they all have a similar usage profile, but it may be
a false economy if the clients are few or their usage varied.

Modularity isn't an unalloyed good either. With too many modules you may write
code that acts optimally in a local way, but suboptimally globally, because it
has no access to, or awareness of, other modules' behaviour. Modules designed
around an abstraction may be substituted, but designing for module replacement
is more often than not over-engineering. It's strongly encouraged by modern OO
testing approaches, but IMO it puts the cart before the horse.

I'd like more people to realize that abstractions represent unknowns, and that
any given implementation needs to be tested in an engineering materials sense,
to find out what its force limits are, how it reacts under scale and pressure,
so people using the abstraction are _not_ just thinking about the abstraction,
they are considering the real thing they're dealing with as well. It saves a
lot of time going down dead ends.

~~~
jnwatson
That we're not all programming in machine code is evidence that some
abstractions are useful.

The most vital abstraction is that most software is blithely unaware of the
physical construct embodying it.

Abstraction isn't the enemy. I think much of what we call good "software
engineering" is identifying, designing, and implementing the "right"
abstraction.

------
esfandia
Calling the discipline Software "Engineering" was the worst thing that could
have been done, at least given the current state of the art. Because of that,
people thought you should write software the way you build a bridge or a car.
The waterfall method came out of that, and it took decades to realize the
mistake and recover from it.

Software development is more reliant on discrete math (and more generally the
ability to reason abstractly) than math that helps model the physical world.
Hopefully, Professional Engineering bodies will one day realize this and will
adapt their program accreditation criteria accordingly.

~~~
Koshkin
> _the worst thing_

On the contrary, the worst thing was the abandonment of sound engineering
principles and of obligatory professional certification. Since then, the
trade has been open to, and flooded by, know-nothing lazy half-wits who,
should they find themselves in any other industry, or in medicine, would be
immediately disqualified (or, rather, would not be admitted in the first
place).

~~~
confounded
Why do you think they are prevalent and successful?

~~~
TeMPOraL
Because the market doesn't reward useful, quality work as much as it rewards
throwing crap together and investing in marketing.

~~~
confounded
But do we mean quality and usefulness as technical attributes of a program's
construction at a point in time?

Or as beliefs of a customer about a product, or a company?

The skills to get to the latter are often different from those for the former,
and both are expected in many modern software engineers. The latter is much
more about iteration, experimentation, empathy, and judgement. It's not worth
polishing areas of a system which you anticipate abandoning based on the
results of an A/B test. Writing some meh code and then abandoning it can still
be the best engineered choice in a product development _process_.

The engineering of the development process matters more than the program.

I'd argue that success in modern software engineering (at least in Silicon
Valley) is at least as dependent on understanding social processes, as it is
on computational ones.

~~~
analog31
Oddly enough, I think there's an analogy in mainstream engineering. If you
build a supermarket, it has to be engineered for safety. But the engineers are
not responsible for whether the building serves a purpose, such as selling
products for money.

------
exabrial
Weeks of coding can save you hours of planning.

~~~
LeoNatan25
But then it wouldn’t be called “hacking”, and so wouldn’t pass as software
development these days. Instead, let’s have no planning whatsoever, no design
and no requirements, have minimal time for actual development, “hack” code
together and see where the bugs take us. Aka “scrum”, “agile” and whatnot.

~~~
Joe-Z
I feel like the problems you describe here (constantly changing requirements,
minimal time for development, "hacking" code together) have less to do with
agile methods and more with people-stuff.

I think agile methods can help a great deal in software development,
especially breaking your projects apart into more discrete steps and
supervising progress or allowing for course correction via sprints. Also
using eXtreme programming tools like pair programming or code
reviews to avoid having people slip into a frenzy of their individual domain /
help keep obvious bugs out.

Having minimal time and changing requirements is just part of the game. And
people "hacking" code together is just... well, bad craftsmanship, really.

~~~
javabean22
>>using eXtreme programming tools like pair programming or code reviews to
avoid having people slip into a frenzy of their individual domain / help keep
obvious bugs out.

Wtf does this even mean? What are you talking about? If the bugs are obvious,
why do you need extreme programming to find them? What frenzy? Extreme
programming is garbage.

~~~
LeoNatan25
Agreed.

From my experience, when proper preparation is done and there are clear
requirements, most of the "obvious" bugs do not even come into play. They do
when "hacking" a feature and/or working in "POC" mode, which, for me, is not
software engineering.

~~~
Joe-Z
For me it isn't either, and I'm certainly not advocating for "hacking" features
together. The obvious bugs I mentioned were not design bugs, but simple slips
that creep in when you've already worked for a few hours and your attention
starts to flag.

------
dang
"Whither Software Engineering?", discussed in the OP, sounds like an
interesting historical piece about how long it took for mathematics to be
integrated into engineering work. The author is John Allen, who wrote _Anatomy
of Lisp_.

Can anyone find that paper online?
[https://news.ycombinator.com/item?id=962733](https://news.ycombinator.com/item?id=962733)
points to a blog post about it from 2009, but none of the links seem to work.

~~~
vmarsy
You can find papers more easily on Google Scholar; I found it there:

[https://news.ycombinator.com/item?id=14528514](https://news.ycombinator.com/item?id=14528514)

~~~
dang
Ah of course. Thanks!

------
johnbender
FSCQ is a really great example of a large system with proofs of correctness
using extraction from Coq.

Another well-known project is CompCert, the certified C compiler [1], which
has seen a fair amount of external testing and has been used in testing GCC
and Clang, as a reference for detecting invalid compiled semantics [2] (to say
nothing of compiling programs).

1 [http://compcert.inria.fr](http://compcert.inria.fr)

2
[https://blog.regehr.org/archives/1052](https://blog.regehr.org/archives/1052)

------
protonfish
The problem with this line of thinking is that a piece of software is
ultimately built to solve a problem in our real, messy, uncertain world. There
is no way to "prove" that an application served its intended purpose. As Fred
Brooks wrote: "Even perfect program verification can only establish that a
program meets its specification. […] Much of the essence of building a program
is in fact the debugging of the specification."

~~~
donovanm
Exactly. By the time you could formally prove the application correct, the
market will have moved on to something different. Right now, the combination
of quick to develop and mostly correct seems more desirable to companies than
slow to develop and formally correct. The success of languages like
JavaScript and Python shows this.

------
tunesmith
So what would such a curriculum look like? I've tried diving into this a few
times, and it often feels like hurling yourself against a wall in the hope of
picking up bricks. You do actually make progress after a time, but it's
difficult to determine the curriculum flow.

For me I think it started with reading a throwaway aside on "type
verification" and wondering what the heck that was. After a bunch of wikipedia
searching into different types of type calculus (not fun), I saw a reference
on dependent types, which led me to Coq tutorials, and then, in an effort to
be more practical, learning more Scala and some Haskell - the reasoning being
that even if they didn't have full support for dependent types, at least their
type systems were on the right track.

Failed efforts so far have been learning TLA+ and Idris (I haven't yet found a
way to apply any learning), and trying to get through some youtube lectures on
homotopic type theory. Maybe next time.

~~~
pjmlp
I don't know, maybe something like this, degree in informatics engineering
(sorry Google translate):

[https://translate.google.com/translate?sl=pt&tl=en&js=y&prev...](https://translate.google.com/translate?sl=pt&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.di.fct.unl.pt%2Fensino%2Foferta-formativa%2Festrutura-curricular&edit-text=&act=url)

Check the "Consolidation Units" and "Specialization Units" required for
earning enough credits.

Also the degree is validated by the Portuguese Engineers Society.

Just one example, most Portuguese universities offer similar degrees.

I am a strong proponent of engineering in our work, and of software being
under the same quality regulations as other products in general, not only
when human lives are at stake.

~~~
reitanqild
Looks a lot like mine (also western European but with more sysadmin topics and
less graphics and AI).

------
maxander
There exists a body of theory about how to do software engineering as
described here, but there doesn't exist a sizable community of software
engineers willing to _do_ this kind of work on a reasonably large scale.
Judging by HN (which itself probably represents the upper-end of 'engaged'
software developers), most people fidget uncomfortably when forced to
explicitly declare derived types, and gnash their teeth when faced with
Haskell monads; who among you would make a career out of writing Coq proofs
about system correctness?

Optimistically, the community might come around to realizing that this sort of
development is at least _desirable_, and we might wind up with the basic
components of popular OSes, or of firmware for dangerous things like cars and
medical devices, being produced via provably-correct processes. But that will
almost certainly wholly exhaust humanity's reserves of strongly math-educated
software engineers (not to mention industry's willingness to pay for it).
Software engineering may be impossible because _most humans don't like math_.

~~~
LoSboccacc
> there doesn't exist a sizable community of software engineers willing to do
> this kind of work on a reasonably large scale

also, on the other side, there's a very limited pool of employers willing to
pay for this kind of engineering - aerospace and military, mostly, and the
second is cheaping out too.

------
mbell
I think a significant issue in software is the comparatively unconstrained
input space.

I would imagine the average natural gas power plant would have more than a few
issues if you fed it pure hydrogen instead of whatever methane based mixture
it was designed for. Similarly, I imagine there are other inputs that have
tight specifications - lubricants, etc. But these inputs are relatively easy
to control via both human and machine processes, and there aren't that many
combinations you would consider valid. There are complex inputs in the
physical world, for example the impact of complex wind patterns on bridges,
but I still feel this is simpler than many input spaces in software.

In software, at some level you're almost always dealing with human- or
machine-generated input, which may be so complex in nature that even after
all reasonable efforts at validation, the size of the possible input space is
massive, if not effectively infinite. We're getting a bit better at dealing
with this, e.g. with fuzzing, but there is a long way to go before fuzzing is
viable on more complicated inputs.
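As a toy illustration of how fuzzing probes a huge input space (TypeScript; the parser and fuzzer are both hypothetical stand-ins), the idea is to throw arbitrary inputs at a function and check an invariant that must hold for every input:

```typescript
// A toy parser under test: splits a comma-separated line, honoring
// double-quoted fields. Stands in for any real input-handling code.
function parseCsvLine(line: string): string[] {
  const fields: string[] = [];
  let cur = "", inQuotes = false;
  for (const ch of line) {
    if (ch === '"') inQuotes = !inQuotes;
    else if (ch === "," && !inQuotes) { fields.push(cur); cur = ""; }
    else cur += ch;
  }
  fields.push(cur);
  return fields;
}

// A minimal random fuzzer: generate arbitrary strings and check an
// invariant that must hold for *every* input -- here, that the parser
// never throws and always returns at least one field.
function fuzz(iterations: number): void {
  const alphabet = 'ab,"\\ \n';
  for (let i = 0; i < iterations; i++) {
    let input = "";
    const len = Math.floor(Math.random() * 20);
    for (let j = 0; j < len; j++) {
      input += alphabet[Math.floor(Math.random() * alphabet.length)];
    }
    const result = parseCsvLine(input); // must not throw
    if (result.length < 1) {
      throw new Error(`invariant violated for ${JSON.stringify(input)}`);
    }
  }
}
```

Even this crude version explores inputs no one would think to write by hand; real fuzzers add coverage feedback and input minimization on top.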

------
leovonl
Computer Science is a science field, therefore computer scientists usually see
themselves as something more than engineers - you don't want to apply the same
knowledge over and over to make something exactly as the known knowledge tells
you to; you want to try new things and experiment with new ideas. That's the
general mentality of the degree itself.

Real "software engineering" would have to be done with Agda, Coq, or similar
tools - theorem provers that guarantee your solution is sound. It is done that
way for software that absolutely cannot fail (e.g., aerospace). Unfortunately,
engineers are usually too far away from this kind of knowledge - and sometimes
even doubt it is possible or doable, even if you direct them to the research
(!).

By the way, software engineering courses are usually depressing OO garbage,
frameworks and all the nonsense that comes with excessive Java usage, and
frankly never teach anything about reliability or even proving programs
correct - which should have been their primary goal.

~~~
pjmlp
Having a degree in informatics engineering I strongly disagree with that,
given the lectures we had as part of our curriculum.

------
lr4444lr
When you build a bridge, a power station, or a wind turbine, you're creating
something of known economic value for several decades, even centuries. The
whole process is expected to have little to no business deliverables for
years. That lends physical engineering nicely to an established set of
standards. In software, we're often innovating entirely new products, and
people want to see working proofs of concept and organic growth with the user
base, without getting locked into preexisting notions of what the product
_should_ do before the business figures out what the market is.

The nexus in products like vehicular and medical equipment software might be a
good place to start implementing something like a discipline, but those
systems are so much more singular and simplistic in their product function
(not the implementation, I realize) than what the average software dev is
working on that I don't even know how much downstream influence it would have.

~~~
wbl
Your hot new gamified Pets.Com for mobile doesn't need to be engineered. A web
server handling payments absolutely should be.

------
mtraven
Nice title, although I don't think the proposed solution (formal proof
methods) is the answer. My own view is that building software is more of a
design process than an engineering process, for the simple reason that
software rarely has the kind of fixed requirements that make engineering
solutions possible. Most software is constantly evolving and the factors that
make it possible to evolve well include things like conceptual clarity and
modularity, about which formal methods have nothing to say.

There was a movement around 20 years ago to establish a field of practice
called "software design", but I don't think it went anywhere. The problems
remain the same, though.
[http://hci.stanford.edu/publications/bds/](http://hci.stanford.edu/publications/bds/)

------
Joeri
There is a movement going on from weak typing back to strong typing, which is
the same idea approached from the opposite (practical) end. Have the computer
do more work to verify program correctness by rejecting invalid expressions.
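A small TypeScript sketch of that idea (names are illustrative): a discriminated union lets the compiler reject expressions that are invalid in a given state, so a whole class of runtime type bugs never compiles.

```typescript
// A job's lifecycle as a discriminated union: the compiler rejects any
// expression that reads a field which cannot exist in the current state.
type Job =
  | { state: "pending" }
  | { state: "done"; result: string }
  | { state: "failed"; error: string };

function describe(job: Job): string {
  switch (job.state) {
    case "pending": return "still working";
    case "done":    return `got: ${job.result}`;  // `result` is only legal here
    case "failed":  return `oops: ${job.error}`;
  }
  // A line like `return job.result;` placed outside the "done" branch
  // would be rejected at compile time: `result` does not exist on every
  // member of the union.
}
```

The invalid expression is caught before the program ever runs, which is the "computer doing more work" the comment describes.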

Maybe the two will meet at some point?

~~~
Joe-Z
I never got the excitement that some of my college classmates had for
JavaScript because they didn't have to 'deal with types'. I always much
preferred a compiler telling me that this won't work instead of having to find
some obscure type bug at runtime. Don't get me wrong, I still very much liked
programming in JavaScript for other merits, especially the functional aspect
or flexibility of extending your program simply in a browser command line
window.

EDIT: I kind of got distracted by my JavaScript-story here. What I originally
wanted to post was: I always had an interest in philosophy and the logical
thinking you can apply in software development (i.e. you can save yourself so
much trouble by just thinking through your system thoroughly before writing
even one line of code). However, the general sentiment that I also got from a
lot of my classmates was that they didn't want to have anything to do 'with
any of this stuff' (this stuff being the humanities), but they didn't have a
problem with cracking hard math problems day in, day out (which for me at
least is pretty similar to using logical thinking / philosophy on a less
abstract level).

Sorry if my large edit confused some people!

~~~
buzzybee
You wouldn't be alone as a philosopher-developer, there are a decent number of
us out there.

However - shipping code is often very organic, cultivated stuff which foils
attempts to establish a coherent philosophy, a through-line of "how it works":
it works because it worked the day before; all that changed was that a little
bit more was added. Repeat till broken, then refactor back to sanity (or else
abandon ship). It's Sisyphean hill-climbing.

This in turn plays into the falsity of "engineering" as a way to describe
what's going on with a lot of code. You can do it in the small, but at any
scale the codebase habitually becomes a living organism. And that tends to get
puzzle-solver types excited, because if it's alive and keeps changing, then
they will have endless problems to solve forever!

So I believe the philosophers of the crowd ultimately tend to move away from
the applications coalface and look for something relatively smaller that does
allow some time for reflection and distillation of the problem.

~~~
Joe-Z
I find your last paragraph very interesting. May I ask, did you make that move
yourself and, if so, where to?

In my daily work I often refactor an old, organically grown, codebase and
that's exactly what I do:

I try to think of what the system actually is / should be and try to put this
in code in as concise terms as possible (that's kind of a general description
of software development, but I just wanted to contrast this to simply
hammering in the quickest fix you can think of). A lot of times, though, I
wonder if that is really what is needed, and whether a simple 'prototyper'
kind of developer would get the job done faster, though maybe not as elegantly
(slight humble-brag there).

------
nunez
Teams that are 100% invested on having clean, tested and reliable code _can_
have software engineering, absolutely.

The problem is that outside of the tech industry, most of the people with the
pursestrings don't see the value in it _most of the time_.

Why? When their main widget makes $100M/month (for example) and its current
maintenance, as fucked as it might be, costs $1-10M/month (1-10% of revenue),
then spending, say, $1-10M to drop that OpEx to half of current costs, prima
facie, doesn't make a lot of sense.
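The comment's arithmetic can be made explicit (TypeScript; all figures are the comment's illustrative numbers, not real data):

```typescript
// Back-of-the-envelope payback for a one-off refactoring spend that
// halves monthly maintenance OpEx.
function paybackMonths(investment: number, monthlyOpex: number): number {
  const monthlySavings = monthlyOpex / 2; // OpEx cut in half
  return investment / monthlySavings;
}

// With $10M spent against $10M/month of OpEx, the spend pays back in
// 2 months -- but the $5M/month saved is still only 5% of $100M/month
// revenue, which is why the purse-holders often shrug anyway.
```

The payback is quick in relative terms; it's the smallness of the savings next to revenue that makes it uncompelling, which is the point being made.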

What changes that equation is when some event that _could have_ been prevented
by good software engineering principles occurs. Good examples: the latest
British Airways IT meltdown, LinkedIn's massive data breach, 100's of millions
of CC's stolen from TJ Maxx and others.

You see, when something happens that threatens that $100M/month cash machine,
many millions of dollars of bonuses, raises, hirings, parties, expansion
plans, and other general symbols of growth and progress get threatened as
well. THAT is what people with the pursestrings don't want to see messed with.

The overall problem with this is that it's _reactive_, and every reactive
event has a lifespan. So it takes finding someone with connections who can
either capitalize on an unfortunate event, trailblazing a path for software
engineering to happen, or tell a good enough story of previous woes and
misfortunes to make it happen all the same.

TL;DR: If you want "software engineering" to happen at your company, you need
to tell a compelling story of how NOT making it happen will lead to great
losses, and then you need to be extremely patient.

------
dkarapetyan
I've looked into Coq and like a lot of others here tried to learn the theory
and application but it is so far removed from my daily work that the benefits
are basically non-existent. I've gotten way more mileage out of trying to
learn systems theory and cybernetics than anything related to dependent types.

People think this is a tool problem but it's not. Well, the tools kinda suck
but fundamentally it comes down to culture and mind share. My background is in
pure math but my programming knowledge is steeped in the hacker tradition. It
is very hard to overcome all that momentum.

There is no killer application for verified programming. Whatever you can do
in Coq in 10 weeks you can do in C in 2 weeks with a few extra buffer
overflows. We'd be further ahead if there were more tools for validating C
code than trying to rewrite the world with dependent types.

~~~
Joe-Z
You're right, it does come down to culture and mind share. That's why the
author already admits this would be a long process. But I think he's right,
and to truly call what we're doing software _engineering_, there needs to be a
more formal basis for how to do things, and especially for discerning between
"this is right" and "this is wrong".

No offense to you, but oftentimes the people who entered programming from
related fields and taught it to themselves give me the hardest time working
with them. Like, no, it's not enough that the software now provides this
feature if you copied 1,200 lines of code for it and just squeezed in your
if-branches and for-loops somewhere. Your job as a software developer is to
think about how you can wed the new requirements with the existing codebase,
for others to build upon.

Sorry, but as a bit of a pedant and clean code enthusiast, this is kind of a
passion topic for me :)

------
powera
There’s a terrible trend in the San Francisco tech scene where “engineer” is
viewed as a synonym for “software engineer”, which is itself a synonym for
“programmer”. It isn’t.

That said, I tend to disagree with people who believe the solution is "use
Coq".

------
nradov
We have existence proofs that Software Engineering is _possible_. For example,
look at what NASA did with the Space Shuttle flight control software. But for
most business domains, real engineering simply isn't cost effective.

[https://www.fastcompany.com/28121/they-write-right-stuff](https://www.fastcompany.com/28121/they-write-right-stuff)

------
ebcode
I'd like to contribute a definition of the words "engineer" and "engineering"
from an old dictionary I had lying around.

From Webster's (1948):

engineer: n. 1. [Rare], a person who makes engines. 2. a person skilled or
occupied in some branch of engineering: as, a mechanical _engineer_, an
electrical _engineer_. 3. the operator of an engine; especially the driver of
a railroad locomotive. 4. in _military science_, a member of that branch of
the army which is concerned with the construction and demolition of bridges,
roads, and fortifications, the laying and sapping of mines, etc. Abbreviated
E., e., eng., engin., engr. v.t. 1. to plan, construct, or manage as an
engineer; hence, 2. to contrive; manage skillfully; superintend; guide (a
measure, action, etc. _through_).

engineering: n. 1. the planning, designing, construction, or management of
machinery, roads, bridges, buildings, fortifications, waterways, etc.;
science, profession, or work of an engineer: abbreviated E., e., eng., engin.
2. a maneuvering or managing.

------
millisecond
It feels like some stability in computing generations (CPU/GPU) or interaction
paradigms (mobile) will be necessary before software engineering can be
formalized. Until bridges and the like had relatively standard designs,
building each one was a craft. Not to say there hasn't been generational tech
in traditional engineering, but generally not on the order of every few years.

~~~
Jtsummers
It'll help, but it's not essential. There are practices that we can adopt
today that will formalize software engineering. We should look to systems
engineering and modeling (as used by other engineering disciplines:
prototypes, mockups, math models, etc.). We already have many of the necessary
formalisms, either from the fields of Math and CS, or from other engineering
disciplines (notably systems engineering).

The problem for software engineering is that it's really easy to model with
code, and those models have an unfortunate tendency to become the final
deliverable.
We have short iteration cycles that make the benefit of systems engineering
and large scale engineering approaches less obvious. In CE it's quite obvious
that a model of a bridge (physical and mathematical) will have benefits
towards the final bridge construction. It's also obvious that it's just a
model.

In software, the model is too often functional enough to pass as the real
thing, but it fails in various ways: it was constructed haphazardly, allowing
security vulnerabilities or stability issues; it wasn't designed to run
quickly or to scale effectively across multiple servers; it wasn't developed
with maintenance in mind.

------
westoncb
I think it's probably not possible. Here's a way of thinking about why that
may be: what we're doing when writing software has as much in common with
doing science as it does with engineering. By 'science' I mean coming up with
a formal description of how some physical system behaves. If we're willing to
generalize this activity to include non-physical systems then, I'll argue,
software development would be included as the same kind of thing.

Fundamentally writing software is about constructing a formal description of a
less formal 'specification' of some abstract system. The architecture used in
this formal description corresponds to what's ordinarily called theory in
science; when we execute our formal description, it corresponds to performing
an experiment, which may lead to the discovery of anomalies (i.e. bugs);
sometimes these anomalies are significant enough that we are led to revise our
theory (i.e. refactor); if our re-conceptualization is significant enough and
we decide to use an entirely new architecture, you're running into something
like a paradigm shift with all the resistance and other social factors this
entails.

Coming up with architectures/theories to formally describe systems in the
foregoing manner is not something we know how to make reliable. If that goal
is clearly out of sight for science, why would we expect it to be achievable
in the sibling discipline of software development?

The exception I see to this (which admittedly is a large exception) is that
probably most programs which need to be written are very similar to programs
that have been written before. If we had some unified process for evaluating
architectures for these programs that we write over and over again, then we'll
probably eventually stabilize on a 'true' architecture for that class of
program. We see this happening to some extent in the creation of 'frameworks,'
but maybe we could come up with better measures for comparing the aptitude of
competing frameworks?

It could also be that the physical systems described in science tend to be
amenable to simpler description than typical software specs, which may be
loaded with non-unifiable features. In this case, frameworks/theory only gets
you so far because most of the work is just describing special cases...

Anyway, maybe the 'science' we need to found software engineering on top of
would be a formalization of this general process of coming up with formal
descriptions of either abstract or physical systems. I bet there are
regularities there which could be exploited for systematization—and if nothing
else, there are likely insights to be had just through seeing software
development as a thing having significant relations to the practice of
science.

Edit: phrasing, typos, etc.

~~~
mritthaler
I think this really hits the nail on the head. It may also be the foundation
for why we find things like neural networks more usable for things like
vision than directly formalizing the approach. In other words, you could use
an approach like Coq to verify that the neural net software calculates
correctly, but you couldn't use it to formalize what it calculates.

------
haskellandchill
I hope things go this way. I'm betting my future on it.

~~~
archgoon
The best way to predict the future is to invent it ;)

Good luck!

------
faragon
In my opinion, "Software Engineering" has two main problems:

- Incompetent management.

- Incompetent programmers.

Incompetent management, because of the "sky is the limit" effect: pretending
to be some Steve Jobs or similar, instead of doing the real job of risk
management, resource allocation, putting competent people in charge, and
staying involved so the project(s) are kept on track.

Incompetent programmers, because of not being qualified enough, not having
enough experience, or whatever else makes a programmer nihilistic, or a mix
of the above.

If you have competent management, and competent programmers, you'll deliver.
The problem comes when you have incompetent and/or corrupt management, making
decisions on emotional impulses, or thinking of their bonus first, hiding the
problems, and making the "bomb" explode years later. Then you'll get the
excuses: legacy code, architectural problems, blaming contractors, processes,
methodology, etc.

------
personjerry
No, it's necessary.

