I don't think that's fully accurate (full disclosure: I did the technical review for this article).
First, as for "serialization" vs "deserialization": it can be argued that the word "serialization" is used in two ways. One is the "low level" sense, denoting the specific action of taking data and serializing it. The other is the "high level" sense, where it's just a bag you throw anything related into (serialization, deserialization, protocols, etc.) - the same way Wikipedia does it: https://en.wikipedia.org/wiki/Serialization (note how the article is not called "Serialization and deserialization" for exactly these reasons). So yes, you can argue that the author could have written "deserialization", but you can also argue that the author used the "high level" interpretation of the word and therefore used it correctly.
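To make the distinction concrete, here's a minimal round-trip sketch in Python (using the stdlib json module purely for illustration - any format would do). The "low level" sense of "serialization" covers only the first call; the "high level" umbrella sense covers both directions:

```python
import json

# "Low level" sense: serialization proper - turning an in-memory
# structure into a flat text/byte representation.
record = {"id": 7, "name": "zine"}
blob = json.dumps(record)      # serialize

# The inverse step is deserialization: reconstructing the data
# from that representation.
restored = json.loads(blob)    # deserialize
assert restored == record

# "High level" sense: both steps (plus the format/protocol around
# them) are often filed under the single term "serialization".
```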
As for insertion not happening and the balancing stuff - my memory might be failing me, but I do remember it actually happening during serialization. I think there was even a "delete" option when constructing the "serialized buffer", but it had interesting limitations.
Anyway, I'm not sure how deep you went into how it works (beyond what's in the article), but it's a pretty cool and clever piece of work (and yes, it has its limitations, but I can also see it having its applications - e.g. when sending data from a more powerful machine to a tiny embedded one).
I thought about it (quite a lot, actually), but eventually came to the conclusion that this would end up being misleading and widely misinterpreted.
For example, one could argue that running a modern grammar checker over an article and making comma fixes based on it should already be marked with "AI was used to create this article". But reading a statement like that makes folks think "AI slop", which would not be the case at all and would be insanely unfair to the author. Even creating a scale of "no AI was used at all" → "a bit was used" → "..." wouldn't solve the misinterpretation issue, because regardless of how well we defined the scale, I have zero hope that more than a handful of people would ever read our definitions (and understand them the way we intended) posted somewhere on our website (or even in the zine itself).
Another example would be someone doing research for their article and using AI as a search engine (to get leads on what more to read on the topic). On one hand this is AI usage; on the other, it's pretty similar to just using a classical search engine. Yet someone could still argue that the article should be marked as "AI enhanced".
There are also more popular use cases for AI, like wordsmithing/polishing the language. The great majority of authors (including me) are not native English speakers, yet folks do want their articles to read well (some readers are pretty unforgiving when it comes to typos and grammar errors). LLMs are (if used correctly) good tools to help with the language layer. So, should an article where the author wrote everything themselves and then used AI to polish it be thrown in the same bag as fully AI-generated slop? From my PoV the answer is a pretty clear "no".
Anyway, at the end of the day I decided that any kind of marking on the article wouldn't work as intended, and that outright banning any and all AI usage wouldn't work either (it would be hard to detect / makes no sense in some cases / there are reasons to allow some AI usage). But - as you know, since you refer to our AI policy - I still decided to draw a line in roughly the same place where some universities draw it.
In general, when selecting articles we assume that the reader is an expert in some field(s), but not necessarily in the field covered by the article. As such, things which are simple for an expert in the specific domain can still be surprising to learn for folks who aren't experts in that domain.
What I'm saying is that we don't try to be a cutting-edge scientific journal - rather, we publish even the smallest trick that we decide someone may not know about and may find fun/interesting to learn.
The consequence of that is that, yeah, some articles have somewhat clickbaity titles for some of the readers.
On the flip side, as we know from meme t-shirts, there are only 2 hard things in computer science, and naming is first on the list ;)
P.S. Sounds like you should write some cool article btw :)
For what it's worth, I am only a mid-tier nerd and after reading this issue, I feel like I am your target audience. Nothing deep or overly-detailed, just lots of jumping-off points for me to learn more. Thanks!
Thanks for the suggestion! I wouldn't mind having such articles in PO! tbh - let me think about what we can do (or rather: let me pass this along to the rest of the team so they can think about it too).
I think we should try. We'll have to figure out how to mark it so that it's clear it's a work of fiction, though on the other hand it might be obvious. Anyway, submit it :)
Thanks! I also told Aga via email in the thread where I submitted my article.
Worth noting that the HTML tag in the title was stripped from the PDF table of contents as well, so the title for that article in the contents is missing a word. No big deal, but good to know for future submissions!