I think this should be titled "why use kernels," since the gain here comes from using an RBF kernel (and not from using an SVM per se). Kernel-based L2/Tikhonov regression (i.e. kernel ridge regression) could perform just as well, although it might require more memory to train.
* your training set is 66% of the entire set, not 80%
* you pass `degree=0.5` when creating the `sklearn.svm.SVC`, but since you don't specify a kernel it defaults to `"rbf"`, which ignores the `degree` parameter — `degree` only applies to `kernel="poly"`, and it should be an integer. See  for more.
* you should motivate the kernel transformation more thoroughly: by mapping the data into a higher-dimensional space, you hope to find a separating boundary that doesn't exist in the original space. The mapping is chosen so that we only ever need inner products between vectors, which lets us map into ANY space (even ones of infinite dimension!!) as long as we can compute those inner products.
(The higher-dimensional spaces need certain properties for this to work — they're Hilbert spaces, for one — but it's late and I can't remember the details off the cuff.)
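To make the last two points concrete, here's a minimal sketch of what I mean — note the dataset and split are stand-ins I picked for illustration (concentric circles, 66/34 split), not your actual data:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=300, noise=0.1, factor=0.3, random_state=0)

# 66/34 split, matching the actual proportion in the post.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.34, random_state=0)

# A linear boundary can't separate the two rings...
linear_clf = SVC(kernel="linear").fit(X_train, y_train)

# ...but the RBF kernel implicitly maps into an infinite-dimensional
# space where a separating boundary exists. Note: no `degree` argument
# here -- that parameter is only used by kernel="poly".
rbf_clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

print("linear accuracy:", linear_clf.score(X_test, y_test))
print("rbf accuracy:   ", rbf_clf.score(X_test, y_test))
```

The RBF model should score far higher here, which is exactly the "boundary appears in the mapped space" point above.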