
I have Photoshop, but I use Affinity Photo for 99% of what I do (I make digital art; AP handles assembly and effects). I use Photoshop for a few special effects, but often it's not worth the effort.


Long ago I sometimes played the organ. There is nothing more amazing than being alone in a large dark church, playing a pipe organ at full volume, feeling the vibrations. It's the original heavy metal. No other instrument can duplicate that feeling.


This is true, but it's not very discoverable. I think getting to mess around with an organ online would pique someone's interest enough to seek out an in-person concert once they inevitably find their speakers aren't good enough.


"I'll gladly pay you the next four Tuesdays for a hamburger today" does seem like a bad economic concept.


"Training fodder" is a new one.


All the creative output of mankind, reduced to fodder for feeding a hungry probability machine.


All snarfed over by machines of bottomless nom


I've watched many war rooms at various employers.

At one (20 years ago), they met for six months to determine why our field offices' network connection to the home office was so pathetic and unusable. It was led by the head of networking. After all those meetings, it was decided that all 1000 independent field offices should upgrade their internet to T1 connections. It didn't help. Another six months went by, and I heard from my connections in networking that the real problem was that the head of networking had installed a half-duplex, low-speed ethernet card: all 1000 offices' data had been going through a pinhole. It was replaced, and suddenly everything was fine again, other than the hole in the offices' pockets from an unnecessary upgrade.

No one ever mentioned it publicly.


Apple could start with the UK politicians' iCloud backups, I am sure there is a lot of useful compromising information in them. That might cause a change in attitude.


Apple could be the first modern corporation to declare war on a nation-state. They have the resources to set the internet hounds on them & make them absolutely miserable. Looks like the Heinlein/Snowcrash/Stross corporate futures are in play.


Math is such an interesting field. People can work for decades and not make progress, then discover something in a moment of clarity from some seemingly unrelated problem. As a programmer, I don't have that type of patience.


Get yourself a slower compiler ;)

But all kidding aside, you likely will have those moments. Not because you had patience, but because over your career you collected enough knowledge bits in your brain that at some point they'll click together in extremely odd shapes.

Like with maths, if you choose to follow up, this is either a moment of clarity, or the moment you enter crankdom.


Honestly, this happens in a lot of fields, including programming. I often wonder why we spend so much time trying to justify certain research avenues over others and don't let people just research what they find is interesting. I hear you: we should be efficient and not waste money. But is there actually good evidence that we have strong predictive powers here? There's at least strong evidence that dark horses are quite common in the innovation space and exceptionally common in major breakthroughs. So even if we want to primarily focus on funding promising directions, there is still good evidence that optimal funding requires funding unpopular ideas.

If we think about it in terms of optimization theory, this should make sense anyway. It is often quite good to add noise so that your optimization routine can escape local minima. We can only travel directly to the global optimum if we know exactly where it is. But shouldn't we expect that expert predictive power is much better at pointing to local optima than to the global one? (A global optimum very likely doesn't exist, but that doesn't mean there aren't better optima.)
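To make that concrete, here's a minimal sketch in plain Python (the double-well objective, step size, and noise scale are toy choices of mine, not from any particular source): deterministic gradient descent started near the shallow well stays there, while the same descent with injected Gaussian noise can hop the barrier into the deeper well.

    import random

    def f(x):
        # Toy double-well objective: local minimum near x = +1, global minimum near x = -1.
        return (x * x - 1) ** 2 + 0.3 * x

    def grad(x):
        return 4 * x * (x * x - 1) + 0.3

    def descend(x, steps=2000, lr=0.05, noise=0.0, seed=0):
        rng = random.Random(seed)
        for _ in range(steps):
            x = x - lr * grad(x) + noise * rng.gauss(0, 1)
        for _ in range(200):          # noise-free polish so the iterate settles into a basin
            x = x - lr * grad(x)
        return x

    start = 1.2  # begin inside the basin of the *local* minimum
    plain = descend(start)  # deterministic gradient descent: stays near x = +1
    noisy = min((descend(start, noise=0.3, seed=s) for s in range(10)), key=f)
    print(f"plain GD : x = {plain:+.2f}, f = {f(plain):+.3f}")
    print(f"noisy GD : x = {noisy:+.2f}, f = {f(noisy):+.3f}")

The point isn't the specific numbers; it's that a purely greedy step rule can never leave the basin it starts in, whereas a little randomness lets it explore.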

There are also many famous scientists who didn't "spend much time working." I add quotes because, if you're a researcher, you'd naturally understand there's no such thing as "not working." There is only active work and inactive work. You're likely consumed by the topics and problems you're trying to solve. So doing things like going on walks, playing your favorite sport, or whatever ends up being beneficial, since you can relax and shift between focused and creative modes. But that doesn't happen as much if your boss thinks "working" means staring at the chalkboard. Sometimes it is best to go sit outside and daydream, while other times it is best to hammer your head against that metaphorical chalkboard.

  > I don't have that type of patience.
As for this, patience is a skill. Delayed rewards. Long-term rewards are often noisier and more difficult to attribute to their appropriate causes. Even if the long-term rewards are substantially greater than the short-term ones, and the timeframe isn't too large, most people prefer the short term. Not just because of the reward, but because of interpretability. As an example, just think about education in and of itself. Lots of effort but also lots of reward. Even though the process is very noisy and it is unclear which aspects of education contributed the most to success, it is very clear that there's a strong connection between education and success (that does not mean there aren't other pathways to success, nor that you are guaranteed to be successful by being educated; it can be easy to conflate these things).


> I often wonder why we spend so much time trying to justify certain research avenues over others and don't let people just research what they find is interesting.

Oh, that's just an artifact of using taxpayer money to finance research. If you stopped that, you wouldn't need to mandate any such justifications.

(Though individual funders could of course try to demand any justification they feel is appropriate for their funds. Just like today.)


  > Oh, that's just an artifact of using tax payer money to finance research.
This is true even if you're at a large company. But you're right that it is not true for most of the old scientists, who predominantly came from wealthy families. I don't think we should "stop that", though; rather, I'm arguing that we should "modify it to reflect reality." Of course, it gets political, but governments (and their people) are highly incentivized to make long-term improvements and work at scales and on problems that aren't as appropriate for individuals or even companies (companies must work on much smaller timescales).

We really just need to stop getting in our own way. But that requires admitting we are dumb. Even if we're smart that doesn't mean we're not also dumb.

Then again, Musk has enough money he could develop over a dozen CERNs himself and fully fund them indefinitely, a feat so far done only by the agglomerated efforts of a large organization of countries (despite it being a drop in the bucket for their budgets).

I've never understood billionaires and mega-billionaires though. Because I'd be setting up as many mega projects as I could and giving them indefinite funding, since you can't throw your money away faster than it grows. Well... I guess I understand MacKenzie Scott...


> Of course, it gets political, but governments (and their people) are highly incentivized to make long-term improvements and work at scales and on problems that aren't as appropriate for individuals or even companies (companies must work on much smaller timescales).

What? Governments mostly care about the next election. Companies (and individuals) can plan for the long haul.

> Because I'd be setting up as many mega projects as I could and giving them indefinite funding, since you can't throw your money away faster than it grows.

Two problems: first, it's very easy to just throw away money for no good outcome. See e.g. the ISS or the Space Shuttle. Especially if there's no pressure at all.

Not getting in the way at all also means not getting in the way of charlatans and grifters.

It's a hard problem to separate worthwhile projects from the chaff.

Second, it's very easy even for a billionaire to spend money faster than they make it.

For a simple example, Mark Zuckerberg owns a huge chunk of Facebook (and that's approximately his entire wealth). You can use publicly available stock price data to check how much his investments in the 'Metaverse' have hurt his net worth.


  > You can use publicly available stock price data to check how much [Mark Zuckerberg's] investments in the 'Metaverse' have hurt his net worth.
I assume you're trolling, because you're talking about the third richest man in the world, who doubled his wealth last year and similarly the year before, and who has sold at least $20bn of those stocks, which alone would leave him in the top 100 wealthiest people. What a weird person to use to exemplify your point.


Selling stocks doesn't increase your net worth.


> I often wonder why we spend so much time trying to justify certain research avenues over others and don't let people just research what they find is interesting.

I honestly feel this is mostly because of a (possibly well-justified) fear that, were we not to do that, almost all the money would quickly get captured by grifters whose ideas are fake and of no research value, but who were best at convincing people they're worth funding. We can't read people's hearts, so we need to force out some output we can discriminate on.


I believe you are correct, but at the same time, are we not already giving a lot of money to grifters?

For most research funding, I think the incentives are misaligned for the type of grifters we're trying to prevent, i.e. people just pocketing the money. The money is too low and the barriers are too high. The grifting that does happen is more about metric hacking and the status one achieves in academia. Yeah, there are high-profile people that make good money on this, but at the same time many in the respective fields are not shocked to find that their papers were faked; there were usually existing accusations. But the structures in place are not well aligned to oust or investigate them. The publish-or-perish paradigm gives little time for replication experiments, and you should come with strong evidence before making accusations. We're also getting better at this, but more awareness and more discovery has not led to these people losing their positions or status. (We even see universities protect them at times.) Take this recent plagiarism case[0]. Most authors have quite good citation records and metrics, but how does a work that's so egregious not cause their other works to be investigated? It warrants suspicion. We have network graphs of researchers with their collaborators, but why do we not build these graphs for abusers? Most of the time the abuse is dealt with behind closed doors, which makes it even harder to build the networks.

CS is especially egregious about this. I lost a lot of respect for a bunch of my ML researcher peers when they promoted Rabbit. The demo was so egregiously faked, and none of the claims made sense given the state of research. But we throw tons of money at these types of problems. We thrive on hype bubbles and don't purge the conmen. Hell, we often reward bad behavior, like how that intern who got fired from Bytedance for manipulating code won Best Paper at NeurIPS[1]. The HumanEval paper is ridiculously naive (to think you can write "leetcode style problems" and that, because you wrote them "by hand", they won't be spoiled by training on GitHub; you can find code identical to most of the canonical solutions trivially!). There are tons of memes in ML around plagiarism, and as best as I can tell all these authors are still doing just fine. How an egregious case doesn't result in a one-year probation from a conference is beyond me. We suspend students for this kind of academic misconduct but bury it for professors and grad students?

The incentives are all wrong and we need to take a serious look at ourselves, because we are in fact protecting the grifters. It is just lucky that most people are uninterested in grifting in this domain; as said, there isn't much incentive to (though that is obviously changing with the prominence of ML).

[0] https://openreview.net/forum?id=cIKQp84vqN

[1] https://www.wired.com/story/bytedance-intern-best-paper-neur...


That's how funding used to work before the bureaucrats took control in the 1990s. It's no coincidence we now have so much junk research.


Back a few hundred years ago you could discover a new element just by boiling your own urine. (Specifically, that's how phosphorus was discovered.)

Nowadays you need intercontinental collaboration between research labs.

And that's not _just_ because of the bureaucrats.


There's still a lot of science to be done at home.

Analyzing data sets and producing good data are bottlenecks.

It's usually less fundamental and more related to recording and analyzing properties of the world though.

You can crunch numbers that CERN and MAST provide.

https://archive.stsci.edu/ https://opendata.cern.ch/

I've gotten into 3D printing, and load and temperature data of different filaments is always appreciated.

Mixing materials together, microscopic images, etc...

I get a lot of value from YouTubers who simply follow a consistent methodology of endurance or break testing products or materials. Tear downs and documentation of internals, performance statistics, etc...

Channels like CNCKitchen or ProjectFarm are excellent citizen scientists, for example.


Yes, you can do some of that.

But a lot of low hanging fruit has been picked. (And that's good, that's how progress works.

Compare to how saving an additional life in the US is a lot more expensive than saving one in South Sudan. That's because people in the US have (approximately) already saved all the lives they can save for cheap.)


If you don't fund science, you give the world to other countries, especially China. If you only fund science that makes immediate profits, you never get them, as the foundation does not exist, and the researchers never learn enough to discover anything anyway.

Science, engineering, medicine and technology do not invent themselves. It takes people, sometimes for decades, to make it happen overnight. If you spend nothing, you get nothing. I guess nothing is now the plan.


> If you don't fund science, you give the world to other countries, especially China.

Who are you telling this to, me or the government?


Forget TikTok, I'd rather see a divestment of X.

Twitter was the best idea ever but never made sense as a business; people keep trying to use X as if it still is Twitter, but it isn't. Bluesky is nice but still too small.


Xcode's assistant AI suggestions are 99% ridiculous; they often don't make sense, don't compile, or have nothing to do with my project. Every once in a while it suggests a line or so that is fine. The biggest issue is that I want code completion, and it appends extra items that make no sense at all, requiring that I either backspace a bunch or use the mouse to pick from the menu. I haven't used Copilot or one of the others, so I don't know if this is normal or Apple's is just stupid.


So not normal. Try Cursor.


As long as the code I'm writing matches the training data it is always correct.


Then it's likely an opportunity for factoring. The ideal is to never have duplicate code.
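A tiny, purely hypothetical Python sketch of what that factoring looks like (the endpoint names and the client object are made up for illustration; think of anything with a .get() returning a response with .ok and .json()):

    # Before: the same retry-and-parse loop is pasted twice (hypothetical example).
    def load_users(client):
        for _ in range(3):
            resp = client.get("/users")
            if resp.ok:
                return resp.json()
        raise RuntimeError("gave up on /users")

    def load_orders(client):
        for _ in range(3):
            resp = client.get("/orders")
            if resp.ok:
                return resp.json()
        raise RuntimeError("gave up on /orders")

    # After: the duplicated loop is factored into one helper,
    # and each call site shrinks to a single line.
    def fetch_json(client, path, attempts=3):
        for _ in range(attempts):
            resp = client.get(path)
            if resp.ok:
                return resp.json()
        raise RuntimeError(f"gave up on {path}")

    def load_users(client):
        return fetch_json(client, "/users")

    def load_orders(client):
        return fetch_json(client, "/orders")

Whether the helper is worth it depends on how stable the shared shape really is, but that's the move: once two blocks look the same, give the sameness a name.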

