don-code's comments | Hacker News

I'm sort of curious where the law stands on this (I am not a lawyer).

Since it has a license plate on it, it in theory displays some ownership info. Is that enough for me to say, "it's clearly not mine now"? If it didn't, would that give me any right to take something off a public roadway?

Obviously, I know that the letter of the law, and what actually will be enforced, are two different things. Taking something that belongs to CBP would almost definitely be prosecuted in this case, regardless of whether it's legally fair game to do so.

It appears that I can't direct-link to it, but look up case 19S-CR-00528 on public.courts.in.gov - this was a case in which the Supreme Court of Indiana overturned an earlier ruling that removing a GPS monitoring device from your own car, when you weren't aware it was there, was theft.


I think it's the same as stealing a bike or a car parked on the street. I don't know the subtleties, but I don't think you can presume something is abandoned merely for being left on the street?


This is one of the reasons I hung onto my Treo for so long. It was so much faster to do... well, basically anything that the device was capable of. With the physical keyboard, you actually didn't need to take the stylus out very often, either.

Calling Mark: (power on) (phone key) M-A (send) - hitting the phone key automatically brought up the dialer, which did double duty as contact search.

Adding a new event to the calendar: (power on) (calendar key) (enter) - and just start typing; you could navigate the fields with the up and down arrows.

Opening the calculator: (power on) (home key) C-A (enter) - the launcher was filterable with the keyboard.


Even better, IIRC on the Treo the phone key would turn it on?

I had a Treo 600 and then 650 from around 2003 until 2007 when the iPhone came out. The 600 was among the best devices I've ever had. Rock solid, did exactly what it said it did. The 650 would crash randomly just sitting there. Not quite as bad as a Windows phone of the era, but a substantial regression.


I had the Treo until 2012; the Android headwinds were blowing full speed at that point.

Before the Treo, I had a VisorPhone. Wonderful device, and it fit a specific need (no phones allowed in school - great, I can slide the phone out of the back and continue to use it as a PDA). The thing that killed the VisorPhone for me was PalmOS 3.5's lack of memory protection, combined with a bug in the SMS app. Anybody sending me an MMS message instantly crashed it, requiring me to pull the batteries. Sometimes I didn't realize it had happened for hours, and missed phone calls. MMS messages (group texts, etc.) only became more and more common, and when this became a multiple-times-weekly occurrence, I made a move.


There was an article a few years ago here on HN about "can't be evil" business models, which used Costco as an example. As soon as Costco turns evil, it stops working. https://www.bryanlehrer.com/entries/costco/


Has the site actually been running all this time? I notice that the generator tag says "FrontPage 12" (post-2003), and the site has a TLS certificate, which in 1996 it most certainly would not have had.


The current domain registration also dates to 2003, and as someone lower in the thread notes, the current owner is connected to "4president.org".

I'm having trouble accessing old snapshots, though. The Internet Archive has one as far back as April 1, 2000, but the snapshot viewer has been giving 503 errors all morning.


I got the snapshot to load and it appears that at that point it was being sat on by scammy domain parkers, complete with promises of scandalous celebrity photos and dick pills.


The bottom of the page says "This Web Site is Presented for Educational Purposes by 4President.org"


> Why don't cloud providers have a nice way for tools like TF to query the current state of the infra? Maybe they do and I'm doing IaC wrong?

This is technically how Ansible works. Here's an extensive list of modules that deploy resources in various public clouds: https://docs.ansible.com/projects/ansible/2.9/modules/list_o...

That said, it looks like Ansible has deprecated those modules, and that seems fair - I haven't actually heard of anyone deploying infrastructure in a public cloud with Ansible in years. It found its niche in image generation and systems management. Almost all modern tools like Terraform, Pulumi, and even CloudFormation (albeit under the hood) keep a state file.
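To be fair, the providers do expose live state through their APIs - that's what Terraform walks during a refresh; the state file mostly exists to map the resources in your config to real resource IDs. A minimal sketch of querying live state directly with boto3, assuming AWS credentials are configured (the ManagedBy tag is a made-up convention, purely for illustration):

    import boto3

    # Ask AWS directly for the live state of "our" instances -
    # roughly what a refresh does before diffing against the config.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:ManagedBy", "Values": ["terraform"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"], inst["State"]["Name"], inst["InstanceType"])

The catch is that a walk like this only tells you what exists, not which tool or config owns it - hence the state file.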


I think there are actively maintained modules: https://docs.ansible.com/projects/ansible/latest/collections...

At work we use Ansible to set up Route53 records for infrastructure hosted elsewhere. Not sure if that counts as infrastructure.


Being familiar with the hijinks that Steve Wozniak pulled with the switched mode power supply in the Apple II (1977), I was curious about how the author solved for this piece:

> It also needed three supply voltages; +5v, +12v, and -5v. That made it tough to power it from a single-voltage power supply or battery.

According to https://www.allaboutcircuits.com/news/mc34063-the-switching-..., the solution - the MC34063 - isn't _exactly_ a design that's contemporary with the Altair or other 1970s micros, but was introduced in the early 1980s. That would put it closer in age to the Commodore 64 which, in spite of its much smaller size, still indeed does not fit in an Altoids tin.
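For anyone curious about the numbers: the MC34063 regulates against a 1.25 V internal reference, with the output set by a feedback divider - so, going by the datasheet's application notes (I haven't seen the author's actual schematic):

    Vout = 1.25 * (1 + R2/R1)
    e.g. R2/R1 ~ 8.6  ->  Vout = 12 V (boost configuration)

and the -5V rail would come from running the chip in its inverting configuration.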

Very cool project nonetheless!


Woz only did the digital designs. The Apple II power supply was designed by Rod Holt.

https://www.righto.com/2012/02/apple-didnt-revolutionize-pow...


I had the same question! When you open an Altair or IMSAI the giant, heavy linear power supply really stands out to modern eyes.



A few months ago I spoke with the frontman of a local Boston band from the 1980s, who recently re-released a single with the help of AI. The source material was a compact cassette tape from a demo, found in a drawer. He used AI to isolate what would've been individual tracks from the recording, then cleaned them up individually, without AI's help.
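(For a sense of how accessible that first step has become: the open-source Spleeter library does this kind of stem separation in a few lines. A minimal sketch - I have no idea which tool he actually used, and I'm assuming the tape was digitized to demo.wav first:)

    from spleeter.separator import Separator

    # Load the pretrained 4-stem model and split the mixed
    # recording into vocals, drums, bass, and "other".
    separator = Separator("spleeter:4stems")
    separator.separate_to_file("demo.wav", "stems/")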

Does that constitute "wholly or in substantial part"? Would the track have existed were it not for having that easy route into re-mastering?

I understand what Bandcamp's trying to do here, and I generally am in support of removing what we'd recognize as "fully AI-generated music", but there are legitimate creative uses of AI that might come to wholly or substantially encompass the output. It's difficult to draw any lines on a creative work, just by nature of the work being creative.

(For those interested - check out O Positive's "With You" on the WERS Live at 75 album!)


No, I don't think that really qualifies, because it's solving an engineering problem. I hang out on an electronic music creators' forum which is stringently anti-AI, but nobody objects to things like stem separation. People are skeptical about AI 'mastering' but don't really object, for similar reasons.

What people get mad about is the use of AI to generate whole tracks. Generating rhythms, melodies, harmonies, etc. via AI isn't greeted warmly either, but electronic musicians generally like experimenting with things like setting up 'wrong' modulation destinations in search of interesting results. I don't think anyone seriously objects to AI-produced elements being selected and repurposed as musical raw material. But this is obviously not happening with complete track generation. It's like playing slot machines but calling yourself a business person.


That's not AI generated at all. Using acoustic models to stem out individual sections from a recording is not creating new material (and I wouldn't even describe that as "AI" despite what I'm sure a lot of the tools offering it want us to believe).


Sorry, that's AI. So is OCR, so is voice recognition, and so are many other things you probably use and take for granted. I'd suggest you focus on use cases rather than trying to redefine an entire area of science and technology based on your own preferences.

Saying "I'm against fully AI generated music" is at least precise, and doesn't throw out detecting cancer along with the AI bandwagon term.


Not AI generated. The ML model isn't coming up with anything novel; it's just converting from one format to another, or extracting data - similar to automatically cropping photos to faces.

It’s still AI, but it’s not the AI system generating something


> Sorry, that's AI. So is OCR, so is voice recognition, and many other things you probably use and take for granted

Have you heard of machine learning?


The current genAI trend is machine learning too, so what's the point of this question?


I think the point is that to most people, “AI” has a different meaning than “machine learning”


AI and voice recognition have used "machine learning" for several decades, and it's basically just brute-force statistics.

ML voice recognition is still far superior to AI-based voice recognition. At its best, Gemini is still less accurate at parsing speech than Dragon NaturallySpeaking circa 2000.


> that's AI.

Not if you agree with dictionaries and Wikipedia:

Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.


I think it makes some sense to allow leeway for intelligent "signal processing" using AI (separating out individual tracks, clean-up, etc) vs generating new content with AI.

Similarly, say, for video editors, using AI to more intelligently rotoscope (especially with alpha blending in the presence of motion blur - practically impossible to do it manually), would be a great use of AI, removing the non-creative tedium of the process.

It's not clear where the line is though. I was quite impressed with Corridor Crew's (albeit NVidia+Puget-sponsored) video [1] where they photographed dolls, motion-captured human actors moving like the dolls, and transferred the skeletal animation and facial expressions to those dolls using GenAI. Some of it required nontrivial transformative code to accommodate a skeleton to a toy's body type. There's a massive amount of tedium being removed from the creative process by GenAI without sacrificing the core human creative contribution. This feels like it should be allowed - I think we should attempt to draw clearer lines so that less "creative" uses, where there are clear efficiency gains to be had, become more socially acceptable.

[1]: https://youtu.be/DSRrSO7QhXY


It may be a tedious job to spend days rotoscoping but I personally know people who get paid to do that, and as soon as AI can do it, they will have to go find other work (which they already do, on the side, because the writing is on the wall, but there's a ton of people worldwide who do this kind of work, and that's not the only process being delegated to AI).


So, I'm not pretending that certain kinds of jobs aren't going to be obsoleted. Lots of responsibilities went by the wayside as things were automated with technology and algorithms - I mean, this is not just an AI thing. But I also see many people not even executing certain creative visions because they're out of reach due to the mechanical (not creative) cost of doing things. That's where Jevons Paradox really shines, and I do think that's where the explosion will happen. Of the (very few) people I know who do editing and have to rotoscope, rotoscoping is one of the things they really don't enjoy, but they do it anyway.


That's why we don't use machines to dig ditches, we use spoons - because this planet is a work program.


We also used to pay people to manually copy books. It's not a good argument.


That feels legit to me. We have been using software to isolate individual instruments from a recording for a while.


We've been using software to fix grammar for a long time, and AI does it also. The question is valid: if I get an LLM to fix a few grammar errors in my own writing, am I ripping anyone off? We can't dismiss the question just because grammar fixing is something we did without machine-learning AI trained on vast numbers of other people's texts.

The output does depend on training works, even if you are just fixing grammar errors. But the document is obviously a derivative of your own writing and almost nothing else. A grammatical concept learned from vast numbers of works is probably not a copyright infringement.

Similarly, a part extraction concept learned from training sets such as pairs of mixed and unmixed music, and then applied to someone's own music to do accurate part extraction, does not seem like an infringing use. All features of the result are identifiable as coming from the original mixed audio; you cannot identify infringing passages in it added by the AI - and if such a thing happened, it would be an unwanted artifact leading us to re-do the part extraction in some other way to avoid it.


The question doesn't feel legit to me though. The OP somehow found the one justifiable example among a sea of AI slop.

Justifiable because there were some filters. Those may not even have been "AI"; they may have been deterministic algorithms that the software maker labels "AI" because they think it won't sell otherwise...


I've done audio engineering as a hobby. Even a decade ago, verbiage like "ai noise reduction" was very common. Of course that was RNNs, not transformers. But I think they have a valid point. I googled and found this 2017 post about iZotope integrating machine learning: https://www.izotope.com/en/learn/what-the-machine-learning-i...


I don't know. I think there's a tendency to look at things as pure or impure, as all black or all white. If it was touched by AI, it's AI. If not, it's pure.

I'm not familiar with the music business, but I'm a Sunday photographer. There's an initiative to label pictures that had "generative ai" applied. I'm not a professional, so I don't really have a horse in this race. I also enjoy the creations of some dude I follow on Instagram which are clearly labelled as produced by AI.

But in between, the situation isn't as clear cut. As photographers, we used to do "spot removal", with pretty big "spots", for ages [0]. You just had to manually select the "offending" "spot" and try to source some other part that looked close enough. Now you can use "object removal", which does a great job with things like grass and whatnot but is "generative ai". These are labelled AI, and they are.

I can understand someone arguing that what required a lot of skill is now more accessible. And I guess that's true? But that just sounds elitist.

So what's the issue with "AI"? Do you enjoy the result? Great! Do you hate it? Move to the next one. Does that particular "artist" produce only things you hate? Skip them!

--

[0] my point is about "artistic" pictures, not photojournalism or similar where "what was" is of utmost importance. Note that even in those cases, selective cropping only requires your feet and nobody would label as "edited". But I specifically don't want to open that can of worms.


Seems to me like he just wanted to advertise that song.


I think any line is necessarily going to be arbitrary; a blanket ban on any ML model being used in production would be plainly impossible - using Ozone's EQ assistant or having a Markov chain generate your chord progressions could also count towards "in substantial part", but both are equally hard to object to.

But we also live with arbitrary lines elsewhere, as with spam filters? People generally don't want ads for free Viagra, and spam filters remain the default without making "no marketing emails" a hard rule.

The problem isn't that music Transformers can't be used artfully [1] but that they allow a kind of spam which distribution services aren't really equipped to handle. In 2009, nobody would have stopped you from producing albums en masse with the generative tech of the day, Microsoft's Songsmith [2], but you would have had a hard time selling them; hands-off distribution services like DistroKid and improved models make music spam much more viable now than it was previously.

[1] I personally find neural synthesis models like RAVE autoencoders nifty: https://youtu.be/HC0L5ZH21kw

[2] https://en.wikipedia.org/wiki/Microsoft_Research_Songsmith as ...demoed? in https://www.youtube.com/watch?v=mg0l7f25bhU


>The source material was a compact cassette tape from a demo, found in a drawer.

Was this demo his, or someone else’s IP? If he is cleaning up or modifying his own property, not a lot of people have a problem with that.

If it is someone else’s work, then modifying with AI doesn’t change that.

I think they just don’t want AI generated works that only mash up the work of other artists, which is the default of AI generated stuff.


If the Beatles can use AI to restore a poorly recorded cassette tape of John Lennon playing the piano and singing at his dinner table, I think it's alright if other bands do it, too.

https://en.wikipedia.org/wiki/Now_and_Then_(Beatles_song)


> It's difficult to draw any lines on a creative work, just by nature of the work being creative.

If you want to be some neutral universal third party, sure. If you're OK with taking a position, the arbitrariness actually makes it much easier. You just draw the line you want.

Creativity demands limitation, and those limitations don't have to be justified.


Suppose we have two image-oriented AI's.

One is trained with a set of pairs which match words with images. Vast numbers of images tagged with words.

The other is trained on a set of photographs of exactly the same scene from the same vantage point, but one in daylight and the other at night. Suppose all these images are copyrighted and used without permissions.

With the one AI, we can do word-to-image to generate an image. Clearly, that is a derived work of the training set of images; it's just interpolating among them based on the word associations.

With the other AI, we can take a photograph which we took ourselves in daylight, and generate a night version of the same one. This is not clearly infringing on the training set, even though that output depends on it. We used the set without permission to have the machine extract and learn the concept of diurnal vs. nocturnal appearance of scenes, based on which it is kind of "reimagining" our daytime image as a night time one.

The question of whether AI is stealing material depends exactly on what the training pathway is; what it is that it is learning from the data. Is it learning to just crib, and interpolate, or to glean some general concept that is not protected by copyright: like separating mixed audio into tracks, changing day to night, or whatever.


> With the one AI, we can do word-to-image to generate an image. Clearly, that is a derived work of the training set of images

> The question of whether AI is stealing material depends exactly on what the training pathway is; what it is that it is learning from the data.

No it isn't. The question of whether AI is stealing material has little to do with the training pathway, but everything to do with scale.

To give a very simple example: is your model a trillion parameter model, but you're training it on 1000 images? It's going to memorize.

Is your model a 3 billion parameter model, but you're training it on trillions of images? It's going to generalize because it simply doesn't physically have the capacity to memorize its training data, and assuming you've deduplicated your training dataset it's not going to memorize any single image.

It literally makes no difference whether you use the "trained on the same scene but one in daylight and one at night" or the "generate the image based on a description" training objective here. Depending on how you pick your hyperparameters, you can trivially make either one memorize the training data (i.e., in your words, "make it clearly a derived work of the training set of images").
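Back-of-the-envelope, the capacity argument is easy to check (a rough sketch; the sizes are illustrative, not from any particular model):

    # Rough capacity-per-example arithmetic (illustrative numbers only).
    def bytes_per_example(params, examples, bytes_per_param=2):
        return params * bytes_per_param / examples

    # 1T params, 1,000 images: ~2 GB of weight capacity per image -
    # more than enough to store each one outright.
    print(bytes_per_example(1e12, 1e3))   # 2000000000.0 -> ~2 GB per image

    # 3B params, 1T images: ~0.006 bytes per image -
    # nowhere near enough to memorize, so it has to generalize.
    print(bytes_per_example(3e9, 1e12))   # 0.006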


You don't make it clear whether the music on that tape was "generated" "by AI", only that it was post-processed in such a way.


That's not generative AI, just source separation, which existed well before large language models and the transformer architecture were big.


The example you present seems fairly straightforward to my intuition, but I think your point is fair.

A harder set of hypotheticals might arise if music production goes the direction that software engineering is heading: “agentic work”, whereby a person is very much involved in the creation of a work, but more by directing an AI agent than by orchestrating a set of non-AI tools.


This is very similar to, "am I ripping people off if I just get the LLM AI to make a few grammar fixes in my own writing?"


Ya, "AI" is too broad a term. This was already possible without "AI" as we know it today, but of course it was still the same idea back then. I get what you're saying, though: would he have bothered if he'd have to have found the right filters/plugins on his own? idunno.


That sounds more like unsupervised learning via one of the bread-and-butter clustering algorithms. I guess that is technically AI, but it's a far cry from the transformer tech that's actually got everyone's underwear in knots.


AI unmixing or denoising is not really music-generating AI.


Where does it stop? My dad is a decent guitarist but a poor singer (sadly, I'm even worse). He has written some songs - his own words, some guitar licks or chords as input material - with AI turning them into surprisingly believable finished pieces. To me it's basically AI slop, but he's putting in a modest amount of effort for the output.


This is exactly where the policy starts to get murky


That's not "generating" the music with AI - that's isolating the tracks of existing music. Probably not generative AI at all, and depending on who you ask, not even AI.

This is why it is to these generative AI companies' benefit that 'AI' becomes a catchall term for everything, from what enemies are programmed to do in video games to a spambot that creates and uploads slop Facebook videos on the hour.


One of the neater aspects of HP-UX is that, given the breadth of pre-2000s HP, HP-UX ran on a number of different devices. You can almost (_almost_ - it's a stretch) think of it as a precursor to how Linux proliferated on routers and smartphones.

While you'd _expect_ to find HP-UX racked in a datacenter, you can also find it on workstations, where its proprietary VUE desktop environment eventually morphed into CDE (which, ironically, I've only ever used on Solaris).

It powered at least one early, pre-laptop-form-factor portable PC, the HP Integral. And you can also find it running on oscilloscopes, logic analyzers, and other test equipment from the 80s and 90s.


I really liked VUE so much more. The collaboration with IBM really made CDE more business-like and boring.

I still run 10.20 on my old PA-RISC box for that reason.


Doing connected work from the subway has gotten much, much easier in the last few years. I attribute that to three things:

1. Cell service has become low-latency. This is very different from "fast", which it has also become! When I started working from the train (on HSPA+), pings in the hundreds of milliseconds were the norm. My first step was usually to SSH to a remote machine and just let the text lag. Nowadays, I can run a Web browser locally without issue.

2. Cell service has, at the same time, become ubiquitous in subway tunnels. When I started, there were some areas that dropped down to EDGE (unusable), and some areas that had no service at all. Now, there is exactly one place on the Boston transit system - Back Bay Station - where I lose cell service.

3. Noise cancelling tech has gotten better. It's not just about noise cancelling headphones: both of my laptops (a 2024 MBP and a ThinkPad P14s) have microphones that can filter out screeching wheels and noisy teenagers quite well. That means I can take meetings without making them miserable for the people on the other end.

These, honestly, are huge game-changers for me. The ability to take a 30 minute meeting while commuting, where otherwise I would've had to get in early or stay late at work, actually does wonders for my ability to have a life outside of work.


> The ability to take a 30 minute meeting

At the small cost of making everyone around you miserable.


Maybe they mean those meetings where you don't talk much and just listen to everyone else.


> 2. Cell service has, at the same time, become ubiquitous in subway tunnels.

Not in New York, unfortunately. All of the stations have cell service, and one tunnel (14th Street L train tunnel under the East River), but everywhere else has no service between stations. It’s an annoying limitation that most cities seem to have fixed by now.


If I rode there I'd consider that a feature, not a limitation.


No one talks on phones anymore anyway. But having data would be nice.


I do get through lots of books this way!

