Hacker News
Docker is vulnerable to a symlink-race attack (seclists.org)
203 points by eyberg on May 28, 2019 | 79 comments



So from a reading of the advisory and pull request, this seems to affect a specific set of scenarios, where a malicious image is running (attacker has gained access either to the running container, or compromised the image before distribution) and then a process goes to copy data out of the running container (for example by using docker cp).

Not sure if there are other scenarios where this would hit as well.

One to be aware of, but as with most vulnerabilities, good to understand how it can be exploited, when you're assessing mitigations...


Many services allow uploading arbitrary images. This is certainly a threat they should mitigate against in their sandboxing strategies.


Would an arbitrary image upload alone allow exploitation of this, or would it require an operation on the host along the lines of a 'docker cp' as well?


No, the vulnerability is within the API for `docker cp`, specifically.


The only currently known and exploitable API is with `docker cp`, but FollowSymlinkInScope is used all over the place. Unfortunately, fixing FollowSymlinkInScope requires redesigning the API and then redesigning all the callers so they stop passing around path strings blindly and instead pass around handles (which are O_PATH fds).

But, as I mentioned in TFA, the plan is to rework https://github.com/cyphar/filepath-securejoin to have a sane API that detects attacks on older kernels while using the new kernel bits (once merged).


I’m not entirely sure, as I could only read 50% of the article’s text on my mobile phone...


This has been a known attack for quite some time. The Apache Web Server, which I love/hate, has for years never really been 100% secure because of issues with how the Linux kernel handles FollowSymLinks and SymLinksIfOwnerMatch. You can purchase special distributions of Linux that are patched against this vulnerability, and I believe this gentleman [1] has also released an OS patch that sort of fixes the issue.

Anyway I switched to Caddy [2] even though I deeply love Apache and it has an amazing history behind it.

[1] https://twitter.com/hanno

[2] https://caddyserver.com/


The vulnerability is solved pretty easily by doing your ownership checks _after_ you've opened (but not served) the file. At least, that's how I did it [1]. Despite our pleas and submitted patches, Apache wasn't interested in fixing it upstream.

[1] https://github.com/bluehost/apache-symlink-patch


Small correction -- you'd want to O_PATH, then verify, and then do a real open (by doing a re-open through /proc/self/fd). Sometimes, tricking you into opening a file is sufficient to cause problems.
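
A minimal sketch of that sequence in C (the helper name and the ownership check are made up for illustration -- the verification policy is whatever you need; error handling abbreviated):

  /* Hypothetical helper: open with O_PATH, verify the handle, then re-open
     the same inode through /proc/self/fd -- never re-walk the original path. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int open_verified(const char *path, uid_t expected_owner)
  {
      int pfd = open(path, O_PATH | O_CLOEXEC);
      if (pfd < 0)
          return -1;

      struct stat st;
      if (fstat(pfd, &st) < 0 || st.st_uid != expected_owner) {
          close(pfd);
          return -1;                               /* verification failed */
      }

      char proc[64];
      snprintf(proc, sizeof(proc), "/proc/self/fd/%d", pfd);
      int fd = open(proc, O_RDONLY | O_CLOEXEC);   /* the "real" open */
      close(pfd);
      return fd;
  }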


Caddy is brilliant. Configuration is so simple and easy compared to Apache / Nginx.


Thank you!! These comments made my day. Feel free to open an issue on GitHub if you have any ideas for how to improve it even more, since the sky's the limit for Caddy 2.

https://github.com/mholt/caddy/issues


Have you tried Traefik ?


Are you serious? Their kubernetes setup guide is longer than War and Peace

https://docs.traefik.io/v1.4/user-guide/kubernetes/


Not sure what that has to do with anything.


Different use-cases.


I think symlink attacks could, at least in some situations, be prevented by a Boolean function (perhaps in the kernel) which validates a path. It returns true if the path is immune to symlink attacks, false otherwise. Immune to symlink attacks means this: the path is walked, and the permissions of each directory are checked to make sure that it isn't open for anyone to create a symlink (anyone whose effective ID is not either root, or the caller). If the path is relative, then the current working directory is prepended to it for the purposes of checking. (A nuance here that has to be taken into account is sticky-bit directories like /tmp.)

If the path contains symlinks, then we have to validate that the real parent directory of the symlink target doesn't allow that target to be replaced. E.g. we are following /foo/bar and bar is a symlink to /xyzzy/pop. If /xyzzy is writable to adversaries, pop can be replaced by a malicious symlink even if pop itself isn't writable to other users.

Anyway, the idea is that if a path we would like to operate on has any components vulnerable to manipulation, we fail the entire operation.
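
A rough sketch of what that check could look like (the helper name is made up; it deliberately leaves out the sticky-bit nuance and the symlink-target handling described above):

  /* Hypothetical check: walk each prefix of an absolute path and reject it
     if anyone other than root or the caller could swap out a component.
     Sticky-bit directories and symlink components are not handled here. */
  #include <stdbool.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  bool path_is_trustworthy(const char *abspath)
  {
      char prefix[4096];
      uid_t self = geteuid();

      if (abspath[0] != '/')
          return false;                    /* caller must absolutize first */

      for (const char *p = strchr(abspath + 1, '/'); ;
           p = strchr(p + 1, '/')) {
          size_t len = p ? (size_t)(p - abspath) : strlen(abspath);
          if (len >= sizeof(prefix))
              return false;
          memcpy(prefix, abspath, len);
          prefix[len] = '\0';

          struct stat st;
          if (lstat(prefix, &st) < 0)
              return false;
          if (S_ISLNK(st.st_mode))
              return false;                /* symlink component: not handled */
          if (st.st_uid != 0 && st.st_uid != self)
              return false;                /* untrusted owner */
          if (S_ISDIR(st.st_mode) && (st.st_mode & (S_IWGRP | S_IWOTH)))
              return false;                /* others could replace entries */
          if (!p)
              break;
      }
      return true;
  }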

Things can be simplified if we canonicalize the path (so it is free of symlinks) but that is often undesirable. Software should keep the paths it has been given as-is; the symlink abstraction belongs to the user and should be respected.


Or you can use the open + O_PATH + *at syscall family which can be used to get a handle on a resolved directory relative to which you can manipulate with no re-traversal happening for different operations on that handle.

This combo exists exactly to avoid these kinds of issues.
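
The basic shape of that in C (the path and helper name are made up for illustration; note that O_NOFOLLOW only guards the final component, so by itself this is a building block rather than a complete fix):

  /* Hypothetical example: resolve the directory once, then do every later
     operation relative to that handle with the *at() family -- the prefix
     is never re-walked. */
  #include <fcntl.h>
  #include <unistd.h>

  int read_from_rootfs(void)
  {
      int rootfd = open("/some/container/rootfs",
                        O_PATH | O_DIRECTORY | O_CLOEXEC);
      if (rootfd < 0)
          return -1;

      int fd = openat(rootfd, "etc/hostname",
                      O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
      close(rootfd);
      return fd;
  }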

Another way would be temporarily joining the container's mount namespace to obtain the source handle. But that can't really be done in go since goroutines don't play nicely with per-thread operations.

Edit: After looking through the go standard library it seems that there is an impedance mismatch. Go just does not expose the necessary pieces to do this properly. A dedicated docker-cp tool in C or Rust could probably handle this better. I could be wrong though, maybe it's just not part of the stdlib.


Just using O_PATH won't completely save you. O_PATH can definitely help, especially if you actually check that the path is what you think through readlink(/proc/self/fd/$n), but doing it safely is quite a bit harder than just using *at(2). As mentioned in TFA, I am working on some kernel patches (which will probably result in an openat2(2)) which allow for restriction of path resolution so you can block symlink crossings entirely or force symlinks to resolve to the dirfd you passed.
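
For a sense of what that would look like from userspace once such a syscall exists, a sketch against the openat2(2) interface (no libc wrapper, so it goes through syscall(2); the flag names are taken from the patchset and could change before merging):

  /* Sketch: restricted path resolution with openat2(2).  RESOLVE_IN_ROOT
     scopes absolute symlinks and ".." to rootfd; RESOLVE_NO_SYMLINKS would
     instead reject any symlink crossing outright. */
  #include <fcntl.h>
  #include <linux/openat2.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int open_in_root(int rootfd, const char *path)
  {
      struct open_how how;
      memset(&how, 0, sizeof(how));
      how.flags   = O_RDONLY | O_CLOEXEC;
      how.resolve = RESOLVE_IN_ROOT | RESOLVE_NO_MAGICLINKS;

      return syscall(SYS_openat2, rootfd, path, &how, sizeof(how));
  }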


Yeah, I have read about the precursors of those patches. But this has been simmering for a long time.

The point is that one could do this correctly today with the right rituals. openat2 wouldn't save you if you were still doing plain realpath+open across a security boundary even though openat+procfs or setns are available.

The new syscall implementation would still need a fallback impl. for older kernels after all.


> The new syscall implementation would still need a fallback impl. for older kernels after all.

Yup, this is why I'm planning on getting a sane API available in <https://github.com/cyphar/filepath-securejoin> which projects can use so that the correct thing is done with both old and new kernels. Right now it has a (slightly) improved version of the code Docker has, but I'm rewriting it.

It should be noted that there are lots of examples of interfaces which are incredibly hard to make safe without openat2 -- such as O_CREAT.


https://github.com/google/capsicum-linux would basically render this entire class of bug impossible if used correctly.

It would probably require a redesign of how docker interacts with the host filesystem though, and obviously relies on something that isn't yet in mainline Linux.


It's certainly possible to call nsenter in Go, but even so, unlike other namespaces, mount namespaces affect the entire process rather than individual threads, and the process must not have launched any threads yet at the time that nsenter is called.

That said, docker does support `docker exec` which gives you a shell inside the container's namespaces and cgroups. I'd imagine they could do something similar and just not call exec once they've entered the container. This would be similar to calling `docker exec $containerid cat /path/to/file`


The problem is that doing this for every file operation becomes ludicrously expensive -- in Go you cannot use vfork and the only (safe) fork primitive is fork+exec. You could have a surrogate process in the container but then the code is significantly more complex (all of the operations need to be fully implemented in the surrogate process and you'll need to figure out a safe way of doing IPC without the container being able to attack it).

You also definitely wouldn't want to be running container runtime code inside all of the namespaces in the container -- this is hard to do safely and should be avoided (there are at least two CVEs related to the joining phase of just runc found in the past few years).


So, `docker exec <container id> cat /path/to/file` is definitely something that works, is as safe as `docker exec`, and doesn't involve running any runtime code inside the container. The expense is that you have cat buffering the data and then the runtime reading the data from a pipe and re-buffering that data. Is that really ludicrously expensive? It doesn't seem any worse than SCP, which is just SSH executing the SCP binary as the command and communicating over the stdin/stdout channels.

Doing it with cat is just an example, since cat may not be present inside the container. Instead of cat, and in the absence of fork-without-exec in go, you could execute a small C program injected similarly to `/dev/init` (or via memfd_create) that uses sendfd over a unix socket created by socketpair to pass back a file descriptor to the runtime so that the data is only buffered once.
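
Passing a descriptor back over a socketpair is standard SCM_RIGHTS machinery; a sketch of the helper's sending side (the function name is made up, error handling abbreviated):

  /* Hypothetical helper running inside the container: it opens the file in
     the untrusted context and hands the fd back over a pre-created unix
     socketpair using SCM_RIGHTS ancillary data. */
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  int send_fd(int sock, int fd)
  {
      char dummy = 0;
      struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
      union {
          char buf[CMSG_SPACE(sizeof(int))];
          struct cmsghdr align;
      } u;
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };

      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type  = SCM_RIGHTS;
      cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }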


A privileged user in the container could ptrace(2) the process and start messing with its output. If you have an IPC protocol (like the sendfd you bring up later) then you've now opened windows for container runtime attacks. Yes, you could double (and triple) check the fd is not malicious but so many programs already don't do this properly -- depending on it being done properly is not an ideal solution.

So, you don't want to join all the namespaces, only the mount namespace. But if you join the mount namespace then MS_MOVE might start messing with your resolution. So really, what you want to do is to just pivot_root(2) into the container's rootfs -- this is what "docker cp" already does today, and it has a bug because it pivot_root(2)s into an attacker-controlled path. And all of these solutions will require a fork+exec with Go.

> Is that really ludicrously expensive?

If you're doing it once for every single filesystem syscall (every stat(2), readlink(2), open(2), getdents(2), and so on) then yes, it gets very expensive. As I mentioned, the other solution is to run entire operations inside a pivot_root(2), but then your runtime code gets more complicated (Docker already does this and it's already hard enough to understand how things like chrootarchive actually work at the end of the day). Not to mention that (as this bug shows) even if you run inside a pivot_root(2) you might make a mistake that entirely negates the security benefits.


To be clear, docker exec is not currently vulnerable to this attack, right?

Currently, docker exec (somehow safely) fully enters the container's namespaces and cgroups, then exec's a command inside the container. My suggestion was basically to have a statically compiled C binary that executes in the fully untrusted context, which things can ptrace and manipulate all they want. The thought was that the C binary would open the file descriptor from inside the untrusted context so that it is incapable of doing anything privileged and then send the file descriptor back over the inherited unix socket via sendfd. I'd imagine the only way this could be vulnerable is if sendfd is vulnerable somehow since this means 100% of the path resolution happens from a fully isolated context.

The performance argument makes plenty of sense, but it sounds like it'd be solvable by just doing a classic tar pipe where tar (or similar) is running in the fully untrusted context and writing its output to a pipe (with no unix sockets involved). You'd just need to get that statically compiled tar binary into the container, similar to how `/dev/init` is done. Would this be unreasonable? `kubectl cp` is already doing an actual tar pipe via docker exec; the missing bit is that it fails if tar does not exist inside the container, so you'd need to inject it in. This would fully remove the complexity of chrootarchive and any path checking, and you'd be able to rely entirely on the security constraints of docker exec.


My point wasn't that docker exec is vulnerable, it's that if you were to write a script like:

  % docker exec ctr tar cf - some/path | tar xf -
it would be vulnerable to attack, because the container process could ptrace(2) the tar process and then change its output to cause the host-side tar to start overwriting host files.

My point is that you have to be careful if you're designing a system where the security depends on running a process inside the container and trusting its output -- ideally you wouldn't run any process inside the container at all so that an attacking process can have no impact on it. And that's true even if we assume that the "tar" is not a binary that the container has direct control of.

This concern also applies if you aren't using tar and are instead using some other technique. Hell, even if you don't have an IPC system you could still probably attack a process that is just trying to do a single open(2) call -- you just need to make it pass a different fd.


My argument is that the kernel gives us namespaces, seccomp, selinux, apparmor, etc for isolation and attempting to implement all of the path resolution and permission checking from a privileged context outside of the container defeats all of that and requires reimplementing all of those guards from userspace, which feels futile. By using tar, you're left with serialized path strings and file contents rather than file descriptors, and it should be far easier to sanitize those strings than deal with the linux filesystem API.

I definitely recognize that the container process could ptrace the tar process, and with kubectl cp, it's even directly using whatever tar binary is in the container, so tar could easily be malicious from the start. But what it can never do is break out of the container onto the node, as long as the tar file isn't extracted onto the node with the docker daemon's privileges, which is extremely important for multi-tenant environments.

If you executed your example command as root on the node, then yes, a vulnerability in the node's tar implementation could allow a malicious tar file to take over the node at extraction time, but tar does guard against this by default, as do standard posix user permissions: the tar extraction can happen in a completely unprivileged context.

I do view tar's extraction as a valid attack surface since modern tar implementations are complex, however, that would require a tar CVE and there's no reason that `docker cp`'s output target handling is any less vulnerable to the same problems. I really think the most important thing to guard against is at input time.


"kubectl cp" has had security bugs in the past[1] that are very in-line with what I just outlined (I didn't know this beforehand -- but I would've guessed it was vulnerable if they hadn't seen this issue before). In fact the fix in [1] doesn't look entirely complete to me -- it seems to me you could further mess with the output.

I agree that we should use security in depth (and all of those kernel facilities are great), but actually joining the container itself is not a good idea -- you need to treat it as the enemy. I am not in favour of implementing them all in userspace, this is why I'm working on new kernel facilities to restrict path resolution.

[1]: https://github.com/kubernetes/kubernetes/pull/75037


You don't necessarily need IPC. You only have to spawn a single-threaded child process that can call setns to switch back and forth between mount namespaces. You can also obtain file descriptors in batches to reduce the overhead.

I wonder how an O_PATH handle from a different namespace behaves once you switch back. If *at lookups are performed under its original namespace you only have to obtain it once.


O_PATH operations are performed under the original namespace, and you don't even need to join the mount namespace -- you could just do an O_PATH of /proc/$pid/root which pipes you directly to the root of the mount namespace.

We came up with this idea before on LKML and I'm trying to remember what the issue with this solution was. It's definitely better than the current method by Docker (and actually, I might be able to get this to work within Docker more simply). It wouldn't work with rootless containers, but you could fix that somewhat trivially by switching to the userns of the container's pid1. Since /proc/self doesn't exist, obvious attacks through that such as /proc/self/root won't work.

There is a somewhat esoteric problem, which is that you could trick the process into opening an O_PATH stashed by a bad container process -- but right now there is a pretty big flaw in how the kernel handles this problem anyway that I'm also trying to solve upstream.


Sorry, I only just remembered what the issue with this solution was. Absolute symlink components in the path will "escape" the root. So it actually doesn't help overall -- you still need to verify after you've done the open. My kernel patchset for RESOLVE_IN_ROOT will scope absolute paths too -- so you definitely could combine it with /proc/$pid/root but that's not enough by itself.


Ok, so getting files from the root O_PATH without verification is not enough. But that still leaves joining the mount namespace to obtain individual fds, which can be done safely and in bulk.


But that starting point has to be initially checked somehow. Say we are user "bob" and we open "/foo/bar/xyzzy" (for later use with openat); if, say, "bar" is writable by "mallory", then that isn't a path we can trust in the first place.

O_PATH is a Linux-specific thing (it appeared back in kernel 2.6.39); you can't find it in POSIX, and it won't compile on non-Linux POSIX operating systems or against older kernel headers. It seems to be an optimization; if we omit it, we just get a "fatter" file descriptor that takes more cycles to set up.


You can perform the verification after having obtained the dirfd, e.g. by doing a reverse lookup through procfs to see where it actually points, or by walking the ".." chain and making sure you never leave the container root.

It'll certainly be easier to get right with the newly proposed syscalls. But you can also get it right with the current ones.

> O_PATH is a Linux Kernel 4.x thing; can't find it in POSIX.

I don't think that is relevant to docker which relies on many linux-specific APIs anyway.


I think another way might be to have a set of file descriptors you know correspond to safe paths before doing anything, as you say (they could be dirfds which let you traverse downwards). Then when the `docker cp` command is executed, it does not open any new files, it simply looks up the corresponding fd from a table mapping paths to fds. If the file no longer exists, or it doesn't have a fd matching that path, then error out. The construction could be based on where you're copying to/from on the host system.

That way, even if the symlink somehow gets resolved to a bad path, it will refuse to read it because it does not exist from the point of view of the `docker cp` command. I.e. make the command use a "principle of least authority" where it cannot even see files outside of a certain set of paths, let alone be tricked into opening them by a symlink.


The problem is checking for it and then operating on it. The check and the operation would need to prevent anything else from operating on it during that time.

This is why the proposed patch to docker is pausing the container during the copy.


> The check and the operation would need to prevent anything else from operating on it during that time.

I don't think you understand. If we verify that the absolutized path has sane permissions from top to bottom, then nothing else can operate on it; nothing else that is not either the superuser, or our own user ID. I.e. no untrusted security context.

(If you think that the requirement is literally "nothing else", such as a different thread in the same application in exactly the same security context, then that's a whole different set of goalposts in another soccer field.)


The bug reported here is because Docker sanitizes the symlinks within a path to ensure that nothing is trying to link to something outside a given path, and then it operates on the filesystem. This filesystem is shared with the containerized process. So after Docker sanitizes the path, a malicious container process can change the path out from under it; Docker will assume that everything is still ok, and the process will then gain read/write access to host files.
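
Schematically, the container side only has to keep swapping a benign directory with a symlink so that the swap lands between the check and the use; something along these lines (illustrative only; a real attack would pair this with a concurrently running docker cp on the host side, and it needs a reasonably recent glibc for the renameat2() wrapper):

  /* Illustrative attacker loop inside the container: "safe_dir" is a real
     directory that passes the check, "evil_link" is a symlink to "/".
     RENAME_EXCHANGE swaps the two names atomically, so the already-checked
     path can suddenly resolve somewhere else. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      mkdir("safe_dir", 0755);
      symlink("/", "evil_link");

      for (;;)
          renameat2(AT_FDCWD, "safe_dir", AT_FDCWD, "evil_link",
                    RENAME_EXCHANGE);
  }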

I think a non-privileged user (or the use of user namespaces) would limit the scope of the attack based on permissions (though you have to make sure there are no setuid binaries on the host as well... or use no-new-privileges for the container); however, the attack still exists.


> I think a non-privileged user (or the use of user namespaces) would limit the scope of the attack based on permissions

I don't think so -- Docker does all archive operations from the context of Docker (so, as root). Obviously with rootless Docker this is different, but I highly doubt anyone has started using it yet.

EDIT: Obviously also if Docker has an AppArmor profile applied or restrictive SELinux labels then it will also be limited by that.


You are right, I was thinking about this in the wrong context.


"Docker is vulnerable" is also a valid statement at present, hopefully it will improve by the forces of $$$ and tech fashionablism.


It won't. Docker already has too much inertia to be stopped by a single CVE. The "it's just another bug in an actively supported piece of software, we'll just update tomorrow" mindset will kick in.


Not that big of a deal after you read it, though



This gave me an existential crisis.

We're told not to build our own since that's a waste of time and our version won't be as good.

At the same time we're mocked for not understanding what we're using.

What's the happy medium?


There is no happy medium. Due to its complexity, the whole ecosystem of modern software engineering is inherently dysfunctional. Being able to understand existing solutions and being able to create your own solutions that aren't broken toys are both essential.


"A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

Knight turned the machine off and on.

The machine worked."


"Don't build your own" does not imply "just use what's there blindly". You should have an understanding of the tools you're using, but of course if you try to audit every line of code you include into your app (God help you if you're using node and npm/yarn) then you'll never get any work done.

At some point you simply have to trust other developers, but don't be afraid to build your own, especially if it's smaller functionality (looking at you left-pad ...).

Dependency management is a skill all developers should learn but it's not one that is actively taught or encouraged. Most (including myself) only really start learning once you've felt the bite of that one bad dependency, or fought through dependency upgrade hell.

In short, make sure you're asking yourself "Do I really need this as a dependency or can I build something myself?"


> (God help you if you're using node and npm/yarn)

Try to audit gcc if you’re using C :)


There’s CompCert, if you really want to go that far…

Then there’s the question of auditing the hardware you’re running on ;)


You only have to audit the binary produced, not gcc.


And how do you do that? Btw, without reading it bit by bit you still have a problem, since you can't audit a binary with a full trust chain, i.e. the auditing tool needs to be audited as well, etc...


If you'd like to venture further into this rabbit hole I can recommend "Preventing the Collapse of Civilization": https://www.youtube.com/watch?v=pW-SOdj4Kkk&list=LL6MdPYF0rD...

And "The Thirty Million Line Problem": https://www.youtube.com/watch?v=kZRE7HIO3vk


Not caring what other people mock us for is a very happy medium.


The "don't build your own" advice has a more specific piece of wisdom embedded.

When you build and use a non-trivial piece of software, you also create a large body of non-trivial design and reliability issues, which will take you many iterations and possibly rewrites to get right. Mature software has generally had time to address those issues, and what isn't addressed is better understood by the users and implementors.

When you roll your own, you have to ask: compared to the thing I'm replacing, do I understand the problem domain well enough to anticipate the issues?


I definitely understand the sentiment of "don't build your own" when it comes to cryptography, but do people get mocked for not using Docker and instead using a traditional containerless setup? (Well, I guess people get mocked for anything.) Of course you shouldn't reinvent Docker; if you really need containers, use what's there (whether that's Docker or something else) and try to learn how it works, but I think there's more choices than just "use docker containers" or "build your own containers".


We run our core compute in-house (well, in datacenters); our application isn't particularly well suited for running in cloud hosting environments at all. In our environment, we don't use Docker at all.

Well, with one exception. There is a cross-compilation build environment for an Intel XScale-based industrial computer which is deployed to several hundred remote locations. The previous developer was fond of working alone unchecked and insisted on creating a (pet snowflake) Docker host and a set of containers into which he installed the SDK for said industrial PC. God only knows where he pulled the original Docker base image from. He spent a year doing this unchecked (amongst a few other things) so that he could pad out his CV with the word "Docker". And after that was done, he left the business.

You start the Docker container and it fairly neatly builds the entire environment and creates a filesystem image which can be written to a CF card, installed in the physical industrial PCs, and handed to the field maintenance team.

My point is - the whole build process for this could run on a bare VM which would be under our regular configuration management. The Docker container and host really provide no benefit in this situation (except perhaps quick start up time for a fresh build environment) and are really just a hassle, because we'd rather not mess with Docker. The integration into our Jenkins instance was a complete nightmare - we spent hours poring over the Docker documentation which we found sub-par, or too new (the Docker version he had installed was ancient by this point) and running afoul of various Docker bugs.

We don't have enough other use cases for Docker to make it worthwhile for the rest of the team to learn in depth at this point. It's still on my TODO list to de-dockerize this build process and nuke that Docker host VM forever.


There is absolutely nothing wrong with building your own for personal projects, especially fun ones. The "don't reinvent the wheel" advice is more about how, if you are ENGINEERING something, you should fully understand standards and best practices, how to use them, any configuration if needed, and the composition of tools.

The professional will use and understand tools; it's the enthusiast who builds his own.


I'd say the difference is in what you do with the one you built.

In a number of disciplines, you might learn the basics of building a tool that you use, but you elect to use that knowledge to buy a good tool rather than mastering that problem domain.

So what do you do with the one you built? Can you bring yourself to 'build one to throw away'? We struggle with this mightily. Once we've put effort into something and we have a 'thing' to show for it, we have trouble walking away from the thing. It's our form of hoarding.


I worked at a place that was processing voting for a TV show. The CTO at the time insisted that everything be written in-house so that we would understand how it all worked.

Problem with that is that the code was full of bad documentation, obscure function names, unexpected behaviors and vast areas of missing functionality. Our people writing the software were less capable than the open source equivalents. With the support of some other people I was stumping for stewardship over authorship.

Basically, we should have been allowed to use smaller libraries of Open Source software as long as one or more employees were intimately familiar with the internals of that library. For the one I wanted to use, I already knew it fairly well so I could have reached that level with a month or two of work. Instead I had to keep adding features and bug fixes to our busted piece of crap, or (more often) reimplement functionality in the caller.


For me the happy medium is using the tools to get the job done, even if it entails building from scratch certain parts of the overall system.


It's not something to have a crisis about. We all stand on the shoulders of giants. There's too much out there to understand every facet yourself (for a fun example of this, go read I, Pencil) and have any time left over to actually make anything. Like most things in engineering, and life, it's a trade-off. The only problem here is the smirking know-it-all nature of comics like that.


> We all stand on the shoulders of giants.

Sure! And this does not mean that once we are standing on them we can start to piss on their head. (e.g., the python programmers who despise C, or the matlab programmers who despise fortran). There is a worrying trend of ignorant programmers who dismiss "old-school" or "legacy" systems without realizing that they rely on them every day.

You can mock Perl, for example, but then you buy a brand-new MacBook Pro, and unless you run "file /usr/bin/*|grep -i perl|wc -l" you do not realize how much your computer depends on this language.


> What's the happy medium?

Possibly not building your own, but investing some time in researching how others have built the already existing tech?

That helps if what's out there doesn't work for you - either because it breaks (and you have to fix someone else's stuff that you use) or because it falls short feature-wise (and you _have_ to build your own, either by extending or re-designing).


The happy medium is build your own, do an amazing job, and don't care what some clueless middle manager who secretly hates that you are actually able to write your own is telling you.

Also forget about sleeping or having any social or family life for the next eight years.


> We're told not to build our own since that's a waste of time and our version won't be as good.

We're told not to build our own since we don't understand what we're using


To use things that you understand? It's perfectly possible to read through the documentation/code for the things that you want to use.


There isn't one, we just accept the fragility of our options.


Study what you use of course. Contribute if needed.


Why is MalwareBytes giving a warning for this site?


MalwareBytes is wrong. seclists is a website containing security mailing lists, and CVE mails are a very common sight there.


Because MalwareBytes is scareware?


I'd still like to know why MalwareBytes is warning me about this page. If you can't answer that, why mock someone?


Edit: you've changed your comment so much since I replied that this post no longer bears any relation to the question it was originally responding to....


And it's not true that MalwareBytes is "scareware". I'm sure the person saying that is simply some 12-year-old who wants to show the Windows user that he runs Linux and is a "l33t h4x0r" or something and knows better than my (commercial) security software.

MalwareBytes is actually reporting that something on that page is trying to make an outbound connection to port 50685 on IP address 45.33.49.119. I don't know what to make of this, but it smells fishy. It would be very odd for this program to make a claim if there weren't something going on. I'm not going to risk visiting that page.


45.33.49.119 is the same IP this website is hosted on.

I'm not sure about port 50685, but when I load the page I don't see any connections on that port.


Well, basically, ports above ~49k are in the private/dynamic (ephemeral) range, which means that when you open a website on 80/443 the connection will use a port on your computer above 49k, so nothing malicious.


I don't run nor have ever run MalwareBytes, so I couldn't comment on what's being reported there, but the site itself is one of the few I'd trust normally. So either the site has been hacked or MalwareBytes is reporting a false positive.

Are you connecting over HTTP or HTTPS? If the latter, have you checked who issued the SSL certificate?



