
Hand Coding A Personal Website - petewailes
http://seogadget.com/hand-coding-personal-website/
======
nmc
Nice tutorial, I just disagree on two small points.

First, there are very sound arguments for not using CSS preprocessors [1].

Second, for a website that you own, using ".htaccess" is discouraged for
performance reasons. The Apache 2.4 docs [2] say:

 _" You should avoid using .htaccess files completely if you have access to
httpd main server config file. Using .htaccess files slows down your Apache
http server. Any directive that you can include in a .htaccess file is better
set in a Directory block, as it will have the same effect with better
performance."_

[1] [http://blog.millermedeiros.com/the-problem-with-css-pre-
proc...](http://blog.millermedeiros.com/the-problem-with-css-pre-processors/)
[2]
[http://httpd.apache.org/docs/2.4/howto/htaccess.html](http://httpd.apache.org/docs/2.4/howto/htaccess.html)
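
To make the docs' advice concrete, here's a sketch (the directory path and rewrite rule are invented for illustration) of moving per-directory directives out of .htaccess and into the main config, where they're parsed once at startup instead of on every request:

```apache
# Instead of a /var/www/example/.htaccess containing:
#   Options -Indexes
#   RewriteEngine On
#   RewriteRule ^blog/(.*)$ /blog.php?slug=$1 [L]

# ...put the same directives in a Directory block in httpd.conf,
# and disable the per-request .htaccess lookups entirely:
<Directory "/var/www/example">
    AllowOverride None
    Options -Indexes
    RewriteEngine On
    RewriteRule ^blog/(.*)$ /blog.php?slug=$1 [L]
</Directory>
```

With AllowOverride None, Apache skips checking for a .htaccess file in every directory along the request path, which is where the per-request overhead comes from.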

~~~
prottmann
How many billions of visitors do you need before a local .htaccess slows down
your website significantly?

~~~
dave1010uk
Very rough benchmarks running "ab" against a random page on localhost:

    
    
        with AllowOverride    215 rps
        without AllowOverride 245 rps
    

It makes a difference, but there's not enough data to conclude much.

~~~
m3mnoch
so, what you're saying here is that if you skip the ease and convenience of
.htaccess files, you can get better performance?

as in, if you get slashdotted (ha! i'm old!) your site will stay up either way
as long as you're getting less than 215 rps. and it goes down either way if
you're getting more than 245 rps. that's a pretty tiny window in the grand
scheme of things.

yeah... i'd rather my personal site be easier to maintain.

and, as to the security of it -- if you have access to the httpd.conf file, i
GUARANTEE you've got bigger holes in your own hodge-podge, whole-system
security than something like an .htaccess file on which you'd have to work
really hard and explicitly make insecure.

just sayin'.

~~~
laumars
_> as in, if you get slashdotted (ha! i'm old!) your site will stay up either
way as long as you're getting less than 215 rps. and it goes down either way
if you're getting more than 245 rps. that's a pretty tiny window in the grand
scheme of things._

12% performance boost is actually quite a significant jump when you're pushing
heavy traffic and looking to shave any fat from the stack you can find.

Also your comment about _"and it goes down either way if you're getting more
than 245 rps"_ doesn't really make a whole lot of sense, as the web farm isn't
going to magically crash the moment you get one request more. I suspect you're
not really understanding what those ab results represent, but that doesn't
really matter as I wouldn't trust those figures for any real world usage
anyway. As I pointed out in another post in this thread, the actual
performance penalty will be subject to a considerable number of variables.

 _> and, as to the security of it -- if you have access to the httpd.conf
file, i GUARANTEE you've got bigger holes in your own hodge-podge, whole-
system security than something like an .htaccess file on which you'd have to
work really hard and explicitly make insecure._

You have things completely backwards there. httpd.conf is _more_ secure than
.htaccess because httpd.conf can only be amended and actioned by root, whereas
.htaccess will have lower security permissions and is loaded on demand (i.e. an
attacker doesn't need to restart the Apache daemon to action any changes).

~~~
m3mnoch
> I suspect you're not really understanding what those ab results are
> representing

ah. no, no. i'm not adequately explaining where i was going with that. lemme
try again.

1) it's not 12% on a web farm. it's 12% on one server.

2) that 12% manifests in a slowly degrading experience. so, it takes 4 seconds
to return during peak traffic instead of 3 and a half. meh. whatever.

therefore, if you get hit by something that will _actually_ make a difference,
it's not going to be within 12%. it's going to be like 12,000%. so, it's not
going to make a whit of difference at that point whether you have httpd.conf
or .htaccess.

> You have things completely backwards there

again, i'm not explaining myself well. i fully understand that, in theory,
httpd.conf is more secure. i'm not talking about that.

if you have access to httpd.conf, you also probably have access to
/etc/ssh/sshd_config -- did you configure that securely? does your server
allow root logins? what about your mysql config? what about the latest
security update to the distro?

i'm just saying, if you've got access to httpd.conf, you've got the whole
server. that means you've probably got bigger fish to fry as to worrying about
security than a pretty harmless, defaulted as secure, .htaccess file.

~~~
laumars
_> 1) it's not 12% on a web farm. it's 12% on one server._

Same difference. 12% on 1 server is 12%. But if you have a dozen servers with
the same 12% gain then it's still 12%. Such is the nature of percentages.

 _> therefore, if you get hit by something that will actually make a
difference, it's not going to be within 12%. it's going to be like 12,000%.
so, it's not going to make a whit of difference at that point whether you have
httpd.conf or .htaccess._

I manage a data centre for a number of high profile sites. If I can improve
throughput by 12% then that means I need one less server in the farm. It means
less strain on the db as connections are clearing down quicker. And it also
means the likelihood of visitors hitting 'refresh' due to slow load times is
reduced, which in turn slows the exponential growth that happens shortly
before a site buckles. So trust me when I say 12% does matter if you're
building busy sites - I know this because I wouldn't be doing my day job
properly if I didn't load test this stuff and implement any free optimisations
I can.

 _> i'm just saying, if you've got access to httpd.conf, you've got the whole
server. that means you've probably got bigger fish to fry as to worrying about
security than a pretty harmless, defaulted as secure, .htaccess file._

Yes, I'd already made the former point myself; however, your latter point is
still ignoring the simple fact that you don't need to be root to edit a
.htaccess file. I repeat, you do /NOT/ need to be root to edit a .htaccess
file!

Someone can edit your .htaccess file even when they don't have permissions to
edit httpd.conf (and thus sshd_config or anything else in /etc or other
root-owned config hierarchies). Thus if your http docs are writeable (and they
typically are for personal web servers, as you don't have dedicated network
storage such as a SAN shared across your web servers, so your content is
stored on the web server itself) then a bug in your website could allow an
attacker to write to the disk in the web docs directory. In short, attackers
could create and edit their own .htaccess files.

We're not talking about root access here; these are not attackers with
permission to write to httpd.conf. We're talking about an http-owned user (e.g.
'www-data' on Debian) who shouldn't have the ability to write files at all
gaining the ability to edit the Apache config, because .htaccess is not
typically root owned.

If you want the convenience of a .htaccess file but the security of a
httpd.conf, then change all your .htaccess files to be root owned:

    
    
        sudo find / -name ".htaccess" -exec chown root:root {} \; -exec chmod 644 {} \; -print

~~~
m3mnoch
man. i must be terrible at communicating.

> Same difference. 12% on 1 server is 12%. But if you have a dozen servers
> with the same 12% gain then it's still 12%. Such is the nature of
> percentages.

no. you said it yourself. if you save 12% on a server farm, you can drop out a
server. with a single vps, you can't or else you go from one server to none.
and, we're talking about self-hosting vs. shared hosting, we're talking about
a single server and not an elastic cloud. so, 12% degraded perf? meh. 12,000%
degraded perf? boom.

> I repeat, you do /NOT/ need to be root to edit a .htaccess file!

yessir. i understand that. i've been writing apache vhost files (tho, i think
back then it was straight-up httpd.conf files and not vhosts - _shrug_
whatever. my memory sucks.) since the late-nineties on slackware machines.

my security point:

1) shared hosting -- by default, you have the ability to tinker with .htaccess
files and NOT httpd.conf. this means for it to be insecure, you have to
explicitly do something silly with it. not just turn indexes on or off.
(indexing security arguments notwithstanding)

2) vps/managed/under-your-desk hosting -- this is where you can actually make
changes to httpd.conf. but, you're responsible for the whole machine, not just
apache configs. you have to secure the entire machine and its services from
ssh to ftp (eek!) to security updates and everything in between.

ergo: if the answer to httpd.conf vs .htaccess is "httpd.conf gets more
performance" and "httpd.conf has better security"...

i say the performance isn't relevant in the self-hosted, non-clustered model
and you've probably got bigger security holes in your server to worry about
than some .htaccess files in your htdocs directories owned by www-data.

does that make sense?

~~~
laumars
_> no. you said it yourself. if you save 12% on a server farm, you can drop
out a server. with a single vps, you can't or else you go from one server to
none. and, we're talking about self-hosting vs. shared hosting, we're talking
about a single server and not an elastic cloud. so, 12% degraded perf? meh.
12,000% degraded perf? boom._

That was just an example. But using yours, that's all the more reason to try
and save overhead, because you don't have the ability to spin up extra nodes
when traffic gets heavy.

 _> yessir. i understand that. i've been writing apache vhost files (tho, i
think back then it was straight-up httpd.conf files and not vhosts - shrug
whatever. my memory sucks.) since the late-nineties on slackware machines._

Ditto here :) I don't use Slackware these days but I used to love that distro.
Still miss it at times.

 _> 1) shared hosting -- by default, you have the ability to tinker with
.htaccess files and NOT httpd.conf. this means for it to be insecure, you have
to explicitly do something silly with it. not just turn indexes on or off.
(indexing security arguments notwithstanding)_

Well yeah, but it's a bit of a moot point because there are no alternatives to
.htaccess with shared hosting. Plus they'll generally do some clever stuff to
sandbox each user's vhost (something along the lines of each vhost running as a
different user, if I recall correctly; I've never provided shared hosting
solutions, my line of work is supporting high profile sites / cloud services).

 _> does that make sense?_

Not really. It's just a lazy dismissive argument in my opinion. Saying
something doesn't matter because there's "probably" other issues is the kind
of attitude that leads to servers getting hacked.

You seem a nice guy and experienced as well, so I'm not criticising your
abilities. But whenever I hear others say "I can't be bothered doing xyz and
there's probably other issues", they usually end up getting hacked (or
breaking it themselves) a few months down the line.

~~~
m3mnoch
> Ditto here :) I don't use Slackware these days I used to love that distro.
> Still miss it at times.

right? ...stupid redhat coming along and making it obsolete!

~~~
laumars
hehe (though to be fair, I was running Redhat before Slackware). These days I
tend to manage Debian, SLES, FreeBSD (on personal boxes) and Solaris (plus one
or two Ubuntu Server boxes colleagues have set up, which I'm planning on wiping
when no-one's looking hehe)

~~~
m3mnoch
ha! that's awesome!

------
dlwiest
I was about to link this to a friend who has been using Wordpress and
expressed interest in learning more about how websites work, and then OP got
into installing a local Apache server and using LESS to compile CSS. I'm
afraid it might intimidate her, since she's already hesitant about her
capacity for learning this sort of thing.

I'm not sure why either was brought up, though. If OP is comfortable with
these tools and wanted to write about his own experience, I could understand,
but for the sake of a tutorial, what benefit do these complications provide,
given the risk they present of alienating some segment of the tutorial's
audience? Unless you need a back end language, you don't need to install a
local Apache server; you can run HTML files directly from whatever folder
they're stored in, and if you link with relative paths, you can transfer the
whole directory to a live server and it will still work fine. And for the
amount of rules a beginner will probably write for a small, personal website,
LESS is probably overkill.

Furthermore, anyone who's comfortable with either of these practices would
have no problem adapting them to a basic "building a website with a grid
framework" tutorial, and everyone else can pick them up later, when their
value becomes more apparent.

It reminds me a little of the way every Javascript tutorial I find lately
seems to assume that you're running a Node.js server. Outside of maybe the HN
crowd, are most people running Node servers? I doubt it. And it's almost never
necessary; you don't need Node to configure routing or build some custom
directives in Angular, for example, so why insist that people use it?

~~~
Bahamut
Node provides some very handy tools for working locally. For example, Bower,
the client-side dependency manager, runs on Node. Node also lets you do nifty
stuff like using LiveReload to automatically refresh a page when you make an
edit to a file.

I will admit, using all these tools was initially intimidating for me, but
getting comfortable with them was extremely rewarding.

~~~
dlwiest
Oh, there's no question that local Apache and Node servers can be valuable
tools, but if you're trying to teach your reader something that doesn't
strictly require them, why risk alienating people who aren't familiar? How
many people who just want to learn how two way binding works are going to see
a Node.js server as a "requirement" and move on?

------
ozh
I've rarely seen such an off topic introductory image, but I'm more surprised
to see that kind of very beginner info land on HN front page. Nicely written
and everything, but... HN?

~~~
krapp
Everything old is new again.

~~~
kordless
[http://olds.ycombinator.com](http://olds.ycombinator.com)

~~~
krapp
"and in my day, stylesheets didn't even 'compile!'"

------
pyalot2
Having written pages, by hand, for ages now (like 15 years or so), I recently
took this to a new level. I'm now hosting my site on my own hand coded
webserver and no other server in front (please don't make me explain).

~~~
imdsm
Please DO explain. I'm going to guess at a PHP based site?

I couldn't imagine trying to run my personal website
([https://github.com/Imdsm/PersonalWebsite/](https://github.com/Imdsm/PersonalWebsite/))
on a hand-written web server and database server, though if I did, that would
be an achievement.

~~~
laumars
It could be something like Go or Java, which would provide the basic framework
for you so "all" you need to do is write the handlers (this is how my latest
project is written), rather than writing your own HTTP classes on top of the
language's TCP/IP libraries.

However, I too would be interested in reading more about the previous
commenter's setup.

~~~
ptr
The first library I wrote for my compiled experimental PL was an HTTP server,
pretty similar to what you describe.

I wonder about the performance and reliability of libraries like those (Go's
http package, Ruby's Sinatra/WEBrick, etc.), and whether they can be used in
production.

------
sdegutis
The title made me think this was gonna be minimalistic, something like
<html><head><title>My Great Website</title></head><body><h1>Hello
world!</h1><p>Find ye great content yonder.</p></body></html> so I was
surprised to see it using LESS and delving into .htaccess.

------
walshemj
So your hand coding but using in OpenDAWS doesn't that defeat the purpose of
doing it all yourself from first principals?

~~~
insertnickname
*principles

~~~
walshemj
so I am dyslexic sue me :-)

BTW was hand coding sites back in 94 :-)

------
mox1
I came to the same conclusion as the author recently, which spurred me to
write a simple, no frills lightweight CMS. For those interested you can see it
at [https://github.com/mox1/webpy-bootstrap-
blog](https://github.com/mox1/webpy-bootstrap-blog)

Personally, I think there's a middle ground somewhere between writing
everything yourself and using hosted Wordpress. I want to control the HTML /
CSS of the pages on my site, but I don't want to end up in a WYSIWYG
environment where all the details are hidden.

------
binarymax
Good for you - keep learning! Knowing how things are built from scratch will
give you a level of understanding that will help in all sorts of situations.

However, I am at the opposite end of the spectrum. I have never used a
template or boilerplate or anything. Every time I've tried to use a prefab
site I end up spending hours tearing it apart and feel like I am going
backwards.

That being said, I cannot really recommend my approach - find a happy middle,
and your productivity will soar.

------
mailarchis
I did something similar a few months ago. I've been developing webapps using
django for a while now, but my frontend skills (html/css) had always been very
rusty. I decided to improve them by building a personal website without using
any frontend framework or doing any server side development.

It was an interesting exercise. The output is here -
[http://archisman.com/](http://archisman.com/)

------
adav
I haven't developed for the front-end web in a while, but is it considered
hand-coding if a boilerplate this-or-that is used at every stage?

~~~
rickr
That was my first thought. I'm also kind of confused at who this is for...the
author is a designer and assumes a firm knowledge of HTML/CSS.

How do you obtain that knowledge without doing anything outside of a CMS?

~~~
aaronem
Quite easily, if you've made a business out of designing Wordpress themes;
part of Wordpress's value proposition is, more or less, "you can make your
site look exactly how you want it to, without needing to know anything about
PHP except how to copy-paste these chunks of boilerplate where you want things
to happen."

------
footpath
Just like to add that if you're using FileZilla as your FTP client, make sure
to not let it save your passwords, as FileZilla stores them in plaintext on
the computer.

[https://forum.filezilla-
project.org/viewtopic.php?t=30765](https://forum.filezilla-
project.org/viewtopic.php?t=30765)

~~~
krapp
One way I found around this is to run filezilla from a script which copies the
password file out of an encrypted drive then nulls the file when you're done:
[https://gist.github.com/kennethrapp/7e58f10e149786baf06c](https://gist.github.com/kennethrapp/7e58f10e149786baf06c)

(I can't find the site I got this from though - and just not saving passwords
probably would be better)

