And according to their page, they've been doing this for a decade. This is such an accomplishment.
But why? I thought PSD was all-binary...
Better outputs for more inputs in better time. Beautiful.
Reminds me of Photopea... Oh, I wonder whether they are in competition? Probably not since Photopea seems to be more about image editing and Avocode is more about design and handoff.
What? It's been well documented forever. It's the second hit on Google.
As a Linux user, there's plenty of software that can render PSDs perfectly but can't edit them, and I really don't get it.
As a web developer, I have to use Photopea to extract assets from PSD files.
It's really strange that a web application can edit PSDs while a lot of powerful software like GIMP and Krita has the same problems with layers and masks in modern .psd files. My guess is that Photopea is specifically designed to be compatible with PSD files.
I suspect the format is incompatible with other software by design, not because it has a poor specification.
C++ is a very powerful, unopinionated language, that gives you a lot of freedom to attack your problem domain the way you best see fit.
If you're writing a networked application, don't use POSIX sockets, which have an API designed for C, go and find a higher level library. If you're parsing complex text formats, don't iterate over buffers with char*'s, go pick up PEGTL. If you're working on graphs, or need to properly index in-memory data, go pick up Boost. If you need a GUI, go pick up Qt.
It's extremely common in C++, due to the lack of a universal package management solution, for people to try and "muddle through" and do shit themselves when it's far outside their core competency.
At one of my last employers, the core product was parsing JSON with std::regex, simply because they couldn't be bothered to integrate a JSON library (which can be done header-only).
But, it does its job well enough: allowing the user to send and receive packets from the network.
If you're writing a networked application that works, chances are you either don't give a sh*t which API you use as long as it lets you send and receive packets (and thus you go with the relatively portable POSIX sockets on Linux / WinSock2 on Windows), or you use a lower-level API (probably proprietary) to reduce syscall overhead and get more control.
If you're parsing text, chances are all you need is fread() to read in the next chunk from a file, and from this you'll build a "next_byte()" function and then a "next_token()" function on top.
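The layering described above can be sketched roughly like this. This is a minimal illustration, not anyone's actual code; the names `Reader`, `next_byte()`, and `next_token()` follow the comment's wording, and the buffer size is arbitrary:

```cpp
#include <cstdio>
#include <cctype>
#include <string>

// fread() fills a buffer, next_byte() hands out one byte at a time,
// and next_token() groups bytes into whitespace-separated tokens.
struct Reader {
    FILE* f;
    unsigned char buf[4096];
    size_t pos = 0, len = 0;

    // Returns the next byte, or -1 at end of input.
    int next_byte() {
        if (pos == len) {
            len = fread(buf, 1, sizeof buf, f);
            pos = 0;
            if (len == 0) return -1;
        }
        return buf[pos++];
    }

    // Returns the next token, or "" at end of input.
    std::string next_token() {
        int c = next_byte();
        while (c != -1 && isspace(c)) c = next_byte();
        std::string tok;
        while (c != -1 && !isspace(c)) {
            tok.push_back((char)c);
            c = next_byte();
        }
        return tok;
    }
};
```

A real tokenizer would distinguish token kinds, but the point stands: the whole I/O layer is one `fread()` call.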
(I've done a lot of network code as well as parsing code, and the I/O API is among the least of my concerns).
All these fancy bottom-up kitchen sink libraries implementing "proper abstractions" or whatever do not provide any value past being able to be combined to form barely working and un-fixable applications where you will pull your hair out when you actually need some control over what's happening.
For something better, you'll need exactly this from external libraries: a clean programmatic (function call) interface that gives you control at a reasonable level of abstraction.
What features do you allude to if I may ask?
In any case, one can't solve any of these "problems" by abstracting over this API ;-)
Putting aside the super obvious problem that there's no common way to use them asynchronously across platforms, and that file descriptors are the wrong abstraction for TCP connections, they are riddled with more obscure issues:
- Linger behavior varies by platform
- Even simple non-blocking behavior varies by platform.
- Common options like enabling TCP keep-alives or setting buffer sizes vary by platform.
- More often than not, in modern times, you also want TLS... and that's not available portably across platforms either, and it's a whole new awful API to learn (if you use OpenSSL directly).
- No RAII means resource leaks (in C++).
Using the raw BSD sockets APIs as a starting point for any portable application in 2021 is fucking insane. There's a reason why Python has the 'asyncio' module now and Go has the net module and goroutines.
I'd expect you can easily code one backend per supported platform since the backend specific code can start out (and most likely, stay) fairly minimal, like 100 lines or so.
> Using the raw BSD sockets APIs as a starting point for any portable application in 2021 is fucking insane
I started a Linux POSIX sockets "embedded" server project in 2019 using BSD sockets API (TCP) that is rock-solid even though it has some critical low-latency components in the data path (~10ms).
I also worked on a Windows GUI project in 2020 using WinSock2 (TCP). Then I did several experimental projects on Linux POSIX sockets in 2021, building reliable streams on top of UDP. The platform is not that important, I used non-blocking sockets and moved from recvmsg()/sendmsg() to recvmmsg()/sendmmsg() as an optimization, which is maybe 20 lines more code on the backend.
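The recvmsg() to recvmmsg() switch mentioned above really is only a handful of lines. A minimal Linux-only sketch (batch size and buffer size are arbitrary; relies on glibc's `mmsghdr`, which g++ exposes by default via `_GNU_SOURCE`):

```cpp
#include <sys/socket.h>
#include <cstring>

// One recvmmsg() syscall drains up to BATCH datagrams at once,
// instead of one recvmsg() call per datagram.
enum { BATCH = 8, MSG_SIZE = 2048 };

// Returns the number of datagrams received (lengths in lens), or -1.
int recv_batch(int fd, char bufs[BATCH][MSG_SIZE], int lens[BATCH]) {
    mmsghdr msgs[BATCH];
    iovec iovs[BATCH];
    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < BATCH; i++) {
        iovs[i] = { bufs[i], MSG_SIZE };
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    int n = recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, nullptr);
    for (int i = 0; i < n; i++) lens[i] = msgs[i].msg_len;
    return n;
}
```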
I wasted several months with the wrong approaches on Windows first. I used WinSock2 with IOCP (asynchronous completion ports) and tried to be super clever with multi-threaded designs (roughly thread-per-connection models) and lots of synchronization, even going into "Fiber" approaches with custom scheduling.
That's all wrong, and I/O is very simple. You place buffers at the connections, then you pump data to/from the buffers on a regular basis. You write plain, simple, procedural code, no threading or any other cleverness needed. All you have to do, just like with files or any other I/O, is get rid of the expectation that you can write "nice" non-blocking code in any way. You just don't do that, it won't work out (except for scripts / batch programs).
I don't see a reason why the story with TLS should be any different (never tried though). It should just be a component that you put between the network buffers and your application code. Something arrives from the network, you shove it to the TLS module. Something arrives from the TLS module, you shove it to the network.
> No RAII means resource leaks (in C++).
Don't worry - it's just the same as with file descriptors or most other resources. If you're declaring them inline on the stack, something is wrong. Usually there should be exactly one place in the codebase where you're creating / accepting sockets, and one place where you're closing them. There's really nothing to worry about. There's so much C++ RAII zealotry and resource leaking FUD in the wild, but with a systematic approach there's little that can go wrong, plus the code will be much better structured as a result.
Using sockets in a synchronous fashion is one way to block for an indefinite period of time. Once a TCP connection is established, there are failure modes where nothing will notify you that the connection has been lost until you try to write(), and even then after minutes in the worst case. Using sockets without timeouts is nuts. The BSD sockets API doesn't give you timeouts.
>I wasted several months with the wrong approaches on Windows first. I used WinSock2 with IOCP (asynchronous completion ports)
If you'd used Boost ASIO you'd have gotten Windows IOCP under the covers for free.
I honestly don't see an argument here. Defaulting to these low-level primitive APIs is an act of hubris. Boost has HTTP, TLS and WebSockets as well, all under the same async I/O model. Even HTTP/2 is available under asio via nghttp2.
You need to either read() or write() on a connection to be informed that the connection was terminated or half-closed. My server application works perfectly, it reacts immediately to any state change. Did not require any special code, just monitor the read and write ends, which is what one does anyway. (Yep, this is API-specific behaviour of course, but it's the only sane approach IMO, since the termination event must be delivered in synchronization with the actual channel interaction.)
Of course, if you're not checking for updates on both directions (read + write) because you're blocked on some blocking interface (either on the same socket or a different I/O port or computation), your server won't react. The API is not at fault. The mistake was to write blocking code.
That is the difference between dirty batch scripts and systems programming.
write() won't fail until after a bunch of TCP re-transmit timeouts have passed.
TCP keepalives can help but you have to enable them and, as I said before, doing so is different on different platforms.
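To illustrate the platform differences mentioned above: enabling keep-alives is `SO_KEEPALIVE` everywhere, but tuning the idle time before probes uses a different option name per platform. A sketch (the 30-second value below is illustrative):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

// Enable TCP keep-alives and set the idle time before probes start.
// Returns 0 on success, -1 on error.
int enable_keepalive(int fd, int idle_secs) {
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) != 0)
        return -1;
#if defined(__linux__)
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,
                      &idle_secs, sizeof idle_secs);
#elif defined(__APPLE__)
    // macOS spells the same knob TCP_KEEPALIVE
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE,
                      &idle_secs, sizeof idle_secs);
#else
    (void)idle_secs;  // Windows uses SIO_KEEPALIVE_VALS via WSAIoctl instead
    return 0;
#endif
}
```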
Honestly, if you're doing anything remotely interactive or latency sensitive on the same thread as network I/O you need to go async.
Of course you can get timeouts (using select() or any other standard event notification mechanism), and most importantly you can easily get non-blocking socket reads/writes, I did just that.
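A timeout via select() is a few lines. A minimal sketch of the approach, assuming POSIX (subject to select()'s well-known FD_SETSIZE limit):

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

// Wait up to timeout_ms for fd to become readable.
// Returns 1 if readable, 0 on timeout, -1 on error.
int wait_readable(int fd, int timeout_ms) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    return select(fd + 1, &rfds, nullptr, nullptr, &tv);
}
```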
> If you'd used Boost ASIO you'd have gotten Windows IOCP under the covers for free.
Well, I got Windows IOCP without the covers. Even better, since now I can integrate all IOCP parts in my application, and don't have to separate the ones that are covered (or might be? hard to see when covered, right?) by library A from those that are covered by library B.
But I'd like to see first whether IOCP is strictly needed anyway, synchronous non-blocking reads/writes might give you more than enough performance for most cases.
> Boost has HTTP, TLS and Websockets as well
I don't use Boost on principle. Maybe some of these libraries are usable, but boost is a community of architecture astronauts. Another reason is that I avoid C++ if possible.
> Defaulting to these low level primitive APIs is an act of hubris.
BSD sockets is not low-level, if anything it is too high-level. As said, it allows you to send and receive packets. What more could you want? Anything else is snake oil.
Update: Yep, this seems to be some overarchitected junk that leads to unmaintainable messes: https://www.boost.org/doc/libs/1_75_0/doc/html/boost_asio/ov...
The basic primitive, receiving new updates, is not readily available. Instead, you're encouraged to do callback handlers, leading to temporal coupling and ravioli code.
All in the name of optimizing for short syntax in toy examples. Look how much you can do in just 5 lines with automatically inferred types, and praise the RAII! (Never mind that anything moderately complex will require twice the normal amount of code just to unwrap all the insanity.)
That's just it, there's no such thing as a 'standard event notification system'. select() is terrible for performance, and all the best options are different on every single platform.
> Instead, you're encouraged to do callback handlers, leading to temporal coupling and ravioli code.
Callbacks are the simplest primitive for async code. If you're not comfortable with them then you're not going to go far with async I/O. Not to mention, ASIO also supports futures and coroutines.
select() is just fine for simple cases, but of course it has some known problems, such as FD_SETSIZE. There are better APIs, and ultimately it was learned that ring buffers between the user process and the kernel (which remove the need for system calls) are a good implementation of asynchronous I/O. I.e. IOCP, io_uring, etc.
Often you don't need any of these APIs at all - in a system with a constant ("stochastic") load you don't really need any kind of event waiting system. Instead, you can process all incoming messages every N milliseconds or so.
> Callbacks are the simplest primitive for async code.
No, the simplest thing is to just use a plain old buffer. See e.g. the IOCP API or just any regular buffer code. One side pushes the message to a buffer. Some (arbitrary) time later, the other side (potentially, but not necessarily a different thread) pulls the message from the buffer and handles it.
It's just buffers, buffers are all that is needed, and buffers plainly are the best way to solve all issues related to event handling. No fancy abstract template insanity, no weird generic resource handling systems, no complex scheduling systems, not even a need to declare any kind of event handling function or interface. Just place a few statically allocated buffers at the connection points where threads of execution (OS threads, but also hardware / network etc.) meet.
Callbacks are of course theoretically equivalent, since they can be made to do the same thing as buffers. You can trivially write a callback that only pushes the message to a buffer. In practice, the difference is significant because lots of callback boilerplate is created and temporal coupling (i.e. same thread, same code path) between enqueuing a message and handling the message is encouraged. This results in a lot of overly complex code, including custom green thread runtimes. I've seen it, I've tried to do the same, I've seen others try to do the same. It turns out to be a very, very bad idea, resulting in the creation of a whole parallel universe with separate green threads I/O implementations.
This is what the term "Callback Hell" was invented for.
Look at Windows Fibers API, it's widely recognized to be a dead end. You will find some good post-mortem material on that topic on the internet.
I'm not saying to use sockets in a synchronous fashion (i.e. blocking I/O). That would, of course, potentially block the thread indefinitely.
"Plain, simple, procedural" does not imply "blocking I/O". What I mean is to use no fancy types, no callbacks, no crazy automatic scheduling magic. Very simply, there is nothing special required to handle events. Just a buffer.
See slide 17 on the WWDC 2017 session.
Similarly on the UWP/WinRT-based APIs, and on Android the NDK doesn't expose the network APIs, which are only available via Java.
This is quite wrong, a Wasm program can overflow internal buffers due to a missing bounds check and access unrelated data as a result. See Heartbleed for a case where this created a very real vulnerability. The Wasm sandbox only protects the boundary with the rest of the system.
* Safe call stack (opaque / managed by the VM, and so uncorruptible).
* Safe control flow (no jumps to unexpected places).
* Safe(r) indirect calls (only methods in the table can be called, and the signature is verified).
However, wasm also lacks a few things, like the ability to write-protect static data (see "Everything Old is New Again: Binary Security of WebAssembly"). Future wasm proposals will hopefully address those things.
I have been interested in security for a long time. The number of security vulnerabilities that have been caused by insecure memory management is really huge. Some people will probably claim that this is not a problem with modern C++ because it can remedy these problems. But this assumes that the programmers know all the possible pitfalls. With respect to security, the problem is that a single weakness in your system can become a point of attack. With a language like C++ there are many possible weaknesses that simply do not exist in memory-safe languages.
This looks like a general criticism of using C++ which has nothing to do with the topic of this post. You're free to criticize, but this criticism alone brings absolutely nothing constructive to the conversation and only serves to incite more useless "have you considered writing this in Rust?" conversations. You're not even suggesting what you think they should've used instead.
> "I have been interested in security for a long time"
Here's a tip for you then: security is not an absolute, and things usually aren't as black or white as you might think. Take a moment to consider the fact that C++ is one of only a small handful of languages with which everything around you has been built for the last 30+ years. Do you know something all of those other engineers don't already know? Otherwise, humility goes a long way.
However, wasm has a very clear sandboxing boundary. The ability of an exploit to escape the sandbox is very small if you are careful there.
IIUC the task here is a user that wants to parse their own files. For that, I think wasm's sandboxing (if used properly) is very useful. Especially since in this case it runs on the web and so we also have the browser's additional isolation (a sandboxed process).
Memory safety is incredibly important, but there isn't a simple answer in the space of tradeoffs, at least not for tasks like this. (For things like running an executable on bare metal that parses arbitrary inputs, obviously things are very different!)
So now think what might happen, when not used properly.
Some form of bounds checking should have been part of the design, like memory tagging.
Section 2.5 of this paper has a good discussion on this: https://cr.yp.to/qmail/qmailsec-20071101.pdf
For years people have been yelling: "It's broken because it's written in C/C++". The same attack was made to promote Java 20 years ago.
Sure, maybe they could have picked a memory safe language, but they didn't. Perhaps because they know C++ and doing the same project in a language they're just learning would result in a ton of other bugs. They even write that they hired a few brilliant C++ programmers, so chances are that they know how to safely handle memory in C++.
I think it's a reasonable point, and a step or two above "just use Rust".
There is no-one on HN who doesn't know that C++ has historic memory management difficulties.