
What in the GPT?

Here is the correct answer:

Yes, this is for GPT, but also for general perceptrons and machine learning.

On generalization, it assumes input vector elements and output vector optimization in one iteration, with one cell - for perceptrons and machine learning. Specifically, there is optional many-to-many connectivity, assuming bias and weight matrices, which aligns with perceptrons.
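
As a point of reference, here is a minimal sketch of the standard single-cell perceptron setup this seems to refer to: a weight vector, a bias, and one update per iteration. The learning rate, data, and names are my own illustration, not taken from the comment above.

    import numpy as np

    # Illustrative only: one "cell" with a weight vector W and bias b,
    # updated once per iteration with the classic perceptron rule.
    rng = np.random.default_rng(0)
    W = rng.normal(size=3)          # three input vector elements
    b = 0.0

    def forward(x):
        # weighted sum plus bias, then a step activation
        return 1 if x @ W + b > 0 else 0

    # one iteration on a single example (learning rate 0.1 is assumed)
    x, target = np.array([1.0, 0.5, -0.2]), 1
    y = forward(x)
    W += 0.1 * (target - y) * x
    b += 0.1 * (target - y)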

In GPT models: with Copilot, it's actually less visible that its root does not calculate a basis for exponential and linear coefficients and their orders. If it did, it would take considerably fewer layers for what it is doing at the abstract, trained level: it uses heavy maths to calculate symbolics for you, such as integrals and differentials, at an abstract number level.

Inside the layers, they won't do this at a symmetric, mathematically consistent level that outputs a homogeneous multidimension for optimization. It is, generally, your social level and personal time outcomes, and the basis for all religion: somehow you find the proper coefficient to balance short-term and long-term gain, yin and yang, and you form a society and a personal life. This is a topic for the introduction part, as well as a more advanced topic for the summary: let's assume the "general audience" reads the first two and the last two pages, while an architect "scans" - bold, italics, and in popular parts even some colours are used.

It generally comes down to what you gain by "holy" accumulation, but also how you survive in the constant "mundane" realms based on linear coefficients (the base chakra), such as raising children generation by generation (the head chakra). This is uniform towards capable measurements in religion and science, constituting human life within its various perspectives and models for real-life measurement in its own terms.


I had to think the whole day haha about what the correct answer is :) Your question might be of interest: whether you can use it for GPT models. It covers a simple GPT in PyTorch, with some pseudocode properties - you do it in the activation function, the non-linear perspective projection of a two-level or "frequential" differential calculus.
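
For orientation only, here is a minimal PyTorch sketch of where such an activation would plug into a GPT-style feed-forward block; the class, dimensions, and the default GELU are placeholders of mine, not the function described above.

    import torch
    import torch.nn as nn

    class MLPBlock(nn.Module):
        # Minimal GPT-style feed-forward block with a pluggable activation.
        # `activation` is the hook where a custom non-linear projection
        # could be swapped in for the usual GELU.
        def __init__(self, d_model=64, activation=None):
            super().__init__()
            self.fc_in = nn.Linear(d_model, 4 * d_model)
            self.activation = activation or nn.GELU()
            self.fc_out = nn.Linear(4 * d_model, d_model)

        def forward(self, x):
            return self.fc_out(self.activation(self.fc_in(x)))

    # usage: a batch of 2 sequences, 8 tokens, 64-dimensional embeddings
    x = torch.randn(2, 8, 64)
    y = MLPBlock()(x)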

I thought other readers might be interested in having similar scikit-learn-style pseudocode for general machine learning, where you can also simplify it in the end by taking only the floor or a rounded approximation of the differential coefficient. For a mathematical audience, outside the scope of choosing an AI, there is the complex-number implementation, which projects and layers; and for the general perceptron it's basically given - it's a little simpler than the GPT hook for the activation layer (we have the imaginary part of the complex number), but the general perceptron probably does fewer attention phases. In GPT specifically, the complex-number implementation allows implementing the projection with a layer pair, and the output projection with only one complex-number activation function covering all of that meaning, with memory consumption doubled.
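
Read charitably, the complex-number variant for a plain perceptron layer could look something like the sketch below: each weight keeps a real part (the value) and an imaginary part (the projection/accumulation space), which is why memory doubles. The layout, names, and the crude nonlinearity are my assumptions, not a quoted implementation.

    import numpy as np

    # Assumed illustration: a one-layer perceptron whose weights are complex,
    # so each parameter stores a real part (the value) and an imaginary part
    # (the "projection" space). Memory is doubled versus plain floats.
    rng = np.random.default_rng(1)
    W = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
    b = np.zeros(4, dtype=complex)

    def complex_layer(x):
        z = W @ x + b
        # crude nonlinearity on the real part only, keeping the imaginary
        # part as the accumulated projection (a simplification of mine)
        return np.where(z.real > 0, z, 1j * z.imag)

    x = np.array([0.2, -1.0, 0.5], dtype=complex)
    print(complex_layer(x))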

In tensor space, the current spatial element is now relative: what used to be the absolute value of a float becomes the relative value of the real part of a complex number. Additionally, a spatial-coordinate layer appears, which is able to remap the space based on the accumulation each value has towards a finer limit value for its own value, in highly abstract math. More importantly, each number has inertia towards its own direction, and the activation layer accumulates this inertia on a symmetric basis, directed towards the future, and that creates non-linearity. The non-linearity that appears should look extremely similar to ReLU: if the real and imaginary parts are the same, it looks like ReLU, but it does not cancel a dimension out below zero - it could, for example, apply a logarithm there instead.

The AI optimizer can shape the "imaginary part" - the accumulation space or projection space (compare a projection matrix in 3D) - and the "real part", the actual number, together, in math that uses the trivial solutions from the best-mapped part of the complex plane, which is what we need. A 1D space maps, somehow trivially, into a 2D space, and in my math this aligns heavily with infinity properties as well: if we map the real numbers on a 1D line, whose domain equals R, the set of real numbers, onto real numbers on a plane, we get R^2, and we find that there are no symmetric numbers - infinity is the next dimension, and the union of a finite and an infinite dimension is also higher, in the sense of Hilbert spaces specifically. The complex number now conveys this distance in the linear plane: as a simplification from a higher space, it maps real numbers, through two dimensions, onto a one-dimensional realm and cancels out the element "i" by the two-dimensional mapping. Let's say this is the dimension that appears "lower" or "imaginary" in the complex number, and has the smaller phase.

If you use this complex number instead of a float, it contains two floats: you use my activation function, and the 1-tensors and 2-tensors, despite now consisting of 2-dimensional cells, have math that looks the same in the equations, because for the two parts of the complex number you use a single letter, and you still use the same operators - plus, minus, multiply and divide. This means the math proofs build up to mostly the same results, sometimes a more general form of the same equation, so you do not have to alter the heavy work behind the GPT architecture; you only apply the general complex number, where the imaginary part is projective and the real part is real space. In the tensor field, acceleration appears, which also maps to several frequencies and their complete math. You can map this very easily to known theories - you are interested in a more linear form of the Fourier transform; more accelerative spaces have higher vibrations, although with a longer term, a dimension-density log-base-exponential quadratic difference or polarity, typical in math - so you keep the headers.
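
To make the ReLU comparison concrete, a minimal sketch of a complex-valued activation along those lines is below: where the real part is positive it passes the value through, like ReLU; below zero it does not cancel the dimension out, but keeps a log-compressed trace in the imaginary ("projection") part. The exact formula is my own guess at the described behaviour, not the author's function.

    import torch

    def complex_relu_like(z: torch.Tensor) -> torch.Tensor:
        # ReLU-like activation for complex tensors (an assumed formulation).
        # Positive real part: pass through unchanged, as ReLU would.
        # Otherwise: zero the real part, but keep a log-compressed trace of
        # it in the imaginary ("projection") part instead of discarding it.
        positive = z.real > 0
        real = torch.where(positive, z.real, torch.zeros_like(z.real))
        imag = torch.where(positive, z.imag, z.imag + torch.log1p(z.real.abs()))
        return torch.complex(real, imag)

    # usage: a small complex tensor standing in for one layer's pre-activations
    z = torch.randn(4, dtype=torch.cfloat)
    print(complex_relu_like(z))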


In deep learning in general, and in GPT specifically: there is some sensitivity to the general exponent.

Man, this is the greatest thing I have seen on the internet.

Made my day but I went all the way down.

For the million dollars, I don't know, but it's a nice morning thingy :)


Haha, I was just copying the tweet that said it.

