Even the anthropomorphism argument doesn't hold up under close scrutiny. When I was in high school I was asked to memorize several poems, including a few that are under copyright today. If I regurgitate one of these poems and present it as my own, this clearly infringes copyright, even if I no longer recall where the poem came from or who wrote it.
How is what OpenAI is doing with NYT stories any different, other than the architecture and substrate of the neural network?
I'm all for policing the outputs of generative models and enforcing copyright on their usage.
I am very much against ruling that their training is infringement.
A model that uses old NYT articles to learn the relationships between words and concepts, and is then used to identify potentially falsified research papers for review, should not be prevented from existing.
If the model is used to reproduce copyrighted material - by all means the person running it should be liable.
This would create an ML industry around copyright identification as a pre-filter applied before output (ironically requiring training on copyrighted material to enforce).
We have a lot of clients who happily use Pulley. I believe they are a YC company themselves. I don't have a dog in this fight, but there are pluses and minuses to these various solutions. Carta is not the be-all and end-all.
As well as the hurdles pointed out in the article, their Article 15 process (starting here: https://digitalerantrag.ksv.at/Dip/?request=auskunft-nach-ar... ) requires an Austrian mobile phone number. Personally I'd like to exercise my rights under Article 15 (since I am an EU citizen) but I don't have an Austrian phone number.
Subject: Request for Personal Data Access Under GDPR Article 15
Dear KSV1870 Data Protection Officer,
I hope this message finds you well. I am writing to formally request access to all personal data that KSV1870 holds about me, in accordance with my rights under Article 15 of the General Data Protection Regulation (GDPR).
Please note that I am an EU citizen residing in Ireland and have recently encountered difficulties in accessing your web services due to the requirement of an Austrian mobile phone number, which I do not possess. This limitation has prompted me to directly reach out via email to exercise my rights under the GDPR.
To assist you in locating my data, I am providing the following personal details:
Full Name: [Your Full Name]
Date of Birth: [Your Date of Birth]
Address: [Your Address in Ireland]
Email Address: [Your Email Address]
Any other relevant identifying information: [Any Other Relevant Information]
Under GDPR Article 15, I am entitled to receive a copy of all personal data that you hold about me, as well as additional information about how my data is processed. My request includes any data collected, stored, or processed by KSV1870, in both electronic and physical formats.
I would appreciate it if you could acknowledge receipt of this request and provide an estimated timeline for a response. Under the GDPR, you are required to respond within one month of receiving this request.
If you require any further information from my end to facilitate this request, please do not hesitate to contact me at [Your Contact Information].
Thank you for your attention to this matter. I look forward to your prompt response.
Yes, this is happening to me too. Last time I successfully edited a Google Sheet was this morning around 8 AM, and now (6 PM) it's broken. My "storage" page shows 18.86 GB of 28 GB used, and I have no idea what's going on.
Sorry I don't have a solution here, but at least we're not alone.
Honestly, I regret ever signing up for the product that is now "legacy G Suite". It's been a rough experience over the years and there doesn't seem to be an easy way to migrate from this product to a regular Google account. (I looked into this back when they were threatening to discontinue the product.)
> [Boston Dynamics] warned that if the "spectacle" goes ahead, Spot's warranty might be voided, meaning it could not be updated.
Wait, so they reserve the right to void your warranty if you use their product to make art?
I remember there was a case where a company bricked someone's device because they left a bad review[0], but having companies judge the artistic merit of their customers' use cases seems somehow even more dystopian.
We've been shipping a Docker-based app to customers for years, and every now and then one of them runs a security scanner on our images. I have yet to see a scan that isn't a disaster of false positives (for the reasons outlined in the article and more!)
One of the craziest recent examples was a scan using a tool called Twistlock. Many of our images are built from an upstream image that may have outdated apt dependencies, so one of the first things we do is upgrade them. Twistlock flagged _every instance_ of this because "Package binaries should not be altered" (in other words, between subsequent layers in an image). I am baffled how anyone at Twistlock decided that this was a useful thing for their product to detect, or why any Twistlock customer trusts it given issues like this.
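For context, the flagged layer is just the routine security-update step; a minimal sketch (illustrative base image, not our actual Dockerfile):

```dockerfile
# Start from an upstream image whose apt packages may be stale...
FROM ubuntu:22.04

# ...so the first layer refreshes the index and applies any pending
# updates, then trims the apt cache to keep the image small.
# A scanner that compares package binaries across layers will see
# every file touched by this upgrade as "altered".
RUN apt-get update \
 && apt-get -y upgrade \
 && rm -rf /var/lib/apt/lists/*
```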
> I am baffled how anyone at Twistlock decided that this was a useful thing for their product to detect, or why any Twistlock customer trusts it given issues like this.
If I was injecting something malicious into your containers via updates, this is exactly how I would go about doing it and exactly what would catch it.
What I'm seeing here is that Twistlock and other tools don't reliably do a good job of explaining why something is flagged in a way that's understandable and accessible to developers. Though honestly I've yet to find any approach to informing developers that actually works.
My favorite was giving them a clear link in the error message about why the build was failing and how to fix it.
It flags that because it could indicate someone got onto your system and injected their own code or changed machine instructions at the binary level, which is a pretty common way to get a remote shell.
It is annoying to have to mark false positives, but that's just the nature of the beast when it comes to being thorough about security. More annoying than this check firing when you update packages in a container image instead of starting clean is that the same technique is often used to compare the hashes recorded by the package installer against what is actually on disk, and thus flags every single package in a JIT-compiled language that caches byte code on disk as altered.
It’s because they’re implementing the feature so they can show a CISO a big scary report and say “good thing you paid us - otherwise you wouldn’t have known!”
If they were serious about build errors they could use the built-in features of APT, YUM, etc. to only report binaries which don’t match the canonical distribution’s hashes, as has been standard sysadmin practice for aeons.
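For reference, the canonical-hash check is already built into the package managers (`dpkg --verify` on Debian/Ubuntu, `rpm -Va` on RHEL-family). A minimal sketch of the underlying technique, using illustrative paths under `/tmp` rather than a real package database:

```shell
# Sketch of manifest-based verification, the same technique dpkg --verify
# and rpm -Va apply against the distribution's recorded checksums.
mkdir -p /tmp/pkgdemo/bin
printf 'original binary\n' > /tmp/pkgdemo/bin/tool

# At "install" time, record a manifest of file hashes.
sha256sum /tmp/pkgdemo/bin/tool > /tmp/pkgdemo/manifest

# Later, re-check: exits 0 while nothing has changed.
sha256sum -c /tmp/pkgdemo/manifest

# Simulate tampering, then re-check: the file is now flagged.
printf 'injected payload\n' > /tmp/pkgdemo/bin/tool
sha256sum -c /tmp/pkgdemo/manifest || echo "binary altered since install"
```

A scanner that only reported files failing this check against the distro's own manifests would catch real tampering without flagging every deliberate `apt-get upgrade`.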
I have used Prisma Cloud / Twistlock. The tampering detection is only useful for detecting changes to running containers, not for changes to binaries between layers. The latter is just dumb and causes counterproductive false positives like the above.
The product I work on is geared towards big corporate IT environments, and I can confirm that this sort of thing is not unusual at all.
A recent support ticket went along the lines of:
Customer: An audit discovered that JDK version X was installed as part of your software. It has a vulnerability and we demand a way to upgrade to JDK X+1 that has the fix.
Our support team: We're already aware of that and the latest point release of our software bundles JDK X+2, which fixes that vulnerability and 2 others. Please upgrade.
Customer: Our compliance team requires JDK X+1. Please provide a way to install this version.
We eventually solved the problem by having them upgrade to the latest major release of our software, which doesn't use Java at all, but it boggles my mind that they wanted a _less_ secure JDK.
After years of being beaten by customers with stories like these, I learnt to treat InfoSec and Compliance teams as finite state machines, particularly at banks and other financial institutions. Learn not to question the sacred spreadsheet, or debate the merits of a request. It's pointless, and all that eye-rolling will only end up with you at the optometrist.
Instead, treat compliance like part of your API. Ensure your product delivers on the expected answer, while continuously improving the security of your products in the parts that are not directly visible.
However, DO get it in writing that the more secure option was offered to them, so that in any possible future court battle the onus for security failures is on them.
Maybe JDK X+1 had gone through a deep and thorough review at some point that got it put on some "OK" list somewhere? And maybe X+2 was too new to have made it through that same deep and thorough review. It makes sense from an auditor's perspective, maybe X+2 has new bugs that X+1 didn't have. They want the good version, not the newest version.
Actually it's super-realistic in practice, especially the JDK, given the short-short-long support duration cadence for JDK releases. e.g. I am totally uninterested in someone telling me I need to use JDK 12 rather than JDK 11: the former is already out of support and the latter will be supported until at least 2026.
OP's story and the article's author are kind of missing the point. These are both simple stories of a vendor failing to meet a [presumably] written requirement: The customer, or regulator, required X, and vendor decided instead to provide Y, and then were dumbfounded when that was deemed unacceptable. OP's vendor went farther, offering Z instead, and the customer again reminded them that X was required. It doesn't really matter if there are better alternatives than X. Those alternatives are not part of the requirement.
Whether Y=X-1 or Z=X+1 is irrelevant. Customer requires X, you provide X or they'll find another software vendor.
Correct! And for the auditor, version X+2 has not been evaluated and certified, so they cannot really "approve" it. When one realizes the auditors are simply doing their job, and the end goal is more or less the same, the going gets much smoother. :-)
This sort of "it's not been vetted and approved" business can get really silly though. Like one company I worked at a couple of decades ago that mandated Windows 95 for employees. IT staff would actually take new machines shipped with Windows 2000, wipe them, and install the corporate Win95 image.
This is something I would totally understand. Many software packages had compatibility issues when moving from 9x-based Windows to NT-based Windows (like expecting they could do things that NT didn't allow), so the last thing you want is some random person complaining that their computer is broken. Everyone gets the same system, where the issues are at least semi-known.
A common mistake I've seen in the industry is to "look down" upon the IT staff, which makes it much more difficult to get a meaningful conversation going.
Yes, wiping W2K and installing W95 is problematic and insecure, but I always believe in the power of a polite conversation and have had great success in persuading the powers-that-be to change their stance. :-)
Oh, but I was IT staff at the time. I wasn't in charge of desktop support, but I knew the guys. They didn't care for it either, it was policy from a high level in a company that, as a whole, did look down on IT. One of my coworkers actually went through the whole process of talking and negotiating to try to get a Win2K laptop instead of having W95 forced on it. No luck.
And, if anyone has been wondering, yes, there was hardware in the brand-new laptop that W95 didn't support. It "worked" with a generic fallback driver but lost some of the functionality that made the laptop worth buying.
Eh -- for some software packages, maybe. I probably trust that new versions of the JDK are generally better, and probably have fewer security issues, or at least fewer known security issues, but I definitely don't trust every new version of every software library or package. New security bugs are introduced all the time, and if a new version represents a major refactor or a change of maintainer, it also represents a major unknown.
This is true in general. But this also applies to the mandate to upgrade from X to X+1. For most (but not all) software it is fair to assume that a patch version does not represent a major refactor.
If something does go wrong, the people who approved X+2 will be the first ones handed pink slips, while the bureaucrats just get a finger wagged at them. That's why it's better to comply and have it noted that X+2 was offered but ultimately pooh-poohed by management.
"but it boggles my mind that they wanted a _less_ secure JDK."
This should not be hard to understand at all.
A lot of things may have changed in those revs beyond the extra patches, and those changes affect systems. It's likely the software was not approved in the new operating environment.
You can't just go ahead and use Java 11 on software designed for Java 8; there may be issues.
The patches that go into one specific rev are what they want, not more.
Most individuals are not qualified to make versioning decisions.
The larger the company, the more at stake.
It's possible it was due to bureaucratic numbness, and the agent probably should have checked harder on the newer versions, but wanting a specific version is reasonable depending on the circumstances.
Our Support Team: We will be happy to comply with your request once you sign this liability release form indicating that you want the less-secure X+1 update, and not the more secure X+2 update which we recommend.
(But we're working on it.)