skepticATX's comments | Hacker News

You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.

And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.


Doesn't OpenAI explicitly have a "definition" of AGI that's just "it makes some money"?

>You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught in the hype is just blatantly false.

No, it doesn't have to be literally everybody to make the point.


Here's why I know that OpenAI is stuck in a hype cycle. For all of 2024, the cry from employees was "PhD level models are coming this year; just imagine what you can do when everyone has PhD level intelligence at their beck and call". And, indeed, PhD level models did arrive...if you consider GPQA to be a benchmark that is particularly meaningful in the real world. Why should I take this year's pronouncements seriously, given this?

OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint...it's not model capability in a vacuum).

Yann indeed does believe that AGI will arrive in a decade, but the important thing is that he is honest that this is an uncertain estimate and is based off of extrapolation.


Interestingly, there seems to be no actual government involvement aside from the announcement taking place at the White House. It all seems to be private money.

Government enforcing or relaxing/fast-tracking regulations and permits can kill or propel even a $100B project, and thus can be thought of as having its own value on the scale of the given project’s monetary investment, especially in the case of a will/favor/whim-based government instead of a hard-rules-based deep-state one.

Isn't that a state and local-level thing, though? I can't imagine that there is much federal permitting in building a data center, unless it is powered by a nuclear reactor.

> Isn't that a state and local-level thing

Build it on federal land.

> unless it is powered by a nuclear reactor

From what I’m hearing, this is in play. (If I were in nuclear, I’d find a way to get Greenpeace to protest nuclear power in a way that Trump sees it.)


Yeah but the linked article makes it seem like the current, one-day-old, administration is responsible for the whole thing.

The article also mentions that this all started last year.

Trump just tore up Biden's AI safety bill, so this is OpenAI's thank-you - let Trump take some credit

Not sure if the downvoters realize that Trump did in fact just tear up Biden's AI safety bill/order.

https://www.reuters.com/technology/artificial-intelligence/t...


He delayed enforcement of it for 75 days while they take time to interpret the law.

It's even mentioned in the article!

> Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.


Why are corporations announcing business deals from the White House? There doesn’t seem to be any public ownership/benefit here, aside from potential job creation. Which could be significant. But the American public doesn’t seem to gain anything from this new company.

We are currently witnessing the merging of government and corporations. It was bad before but the process is accelerating now.

I think there’s a word for that.

Can you please not perpetuate flamewars or use HN for political battle? Your account has unfortunately been doing this repeatedly lately. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Absolutely dang. I’m sorry for causing you any grief.

Appreciated!

[flagged]


If you keep posting in the flamewar style we're going to have to ban you again.

https://news.ycombinator.com/newsguidelines.html


Then you need to ban me again, because I have no idea how I violated the guidelines any more than the OP submission did. My comment was a digression along relevant, imperative and inoffensive lines that were introduced by the source. I am not the only person raising this point, I am not inviting extremist interpretations, and I'm not misrepresenting the consequences presented in the article. For years, Hacker News has tolerated pugilist insults being hurled at the EU with the sole purpose of starting ideological flamewars. Now that someone steps up to accuse America of a double-standard, it's offensive and insensitive? I really don't get it. What culture are you even trying to foster, at this point? Certainly not one that gratifies an audience's curiosity.

If you want to threaten people with the guidelines, then they have to be interpretable. The current "politics is bad except when it's not" rationale is not going to age with grace - it's going to create a series of unsustainable and conflicting precedents that only get worse as America's political landscape further deteriorates. Take a stance on it and stand your ground, people will not listen if your guidelines "maybe" enforce something or provide "probably" off-topic criteria.


Your GP comment, like the one you were replying to, was just ideological flamebait, without substance or information. We want thoughtful conversation here, not people taking swipes at enemies.

https://news.ycombinator.com/newsguidelines.html

> For years, Hacker News has tolerated pugilist insults being hurled at

Other people breaking the site guidelines doesn't make it ok for you to do it (I don't mean you personally, of course, but anyone who comments here). Otherwise we just end up in a downward spiral (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...).

If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...). You can help by flagging it or emailing us at hn@ycombinator.com.


there's some pretty good quotes about that by Mussolini. Things are getting bleak at an incredible pace.

Judging by the reactions it seems fascism has become very popular around here too lately.

The original quote I was referring to:

Fascism should more appropriately be called Corporatism because it is a merger of state and corporate power. - Benito Mussolini


[flagged]


You make broad assumptions about someone, talk down to them, and then claim they need to grow up?

E: parent edited their comment.


Can you elaborate on what you mean?

Sure, tl;dr: Coastals are in a cultural bubble and Orange Man Bad™ is not literally Hitler/Mussolini/{insert-favorite-despot}.

Weird question. Business deals are announced by politicians all the time, especially on overseas trips. Just an example:

https://boeing.mediaroom.com/2015-04-10-Presidents-Varela-Ob...


This isn't an overseas trip though. It's a private partnership announced by the sitting president in the Roosevelt room, literally across the hall from the oval office. I don't know how unprecedented that truly is, but it certainly feels unusual.

I thought the business prop for AI was that it eliminates jobs?

It will. The short-term sale is that it will create thousands of temporary jobs; long-term it will eliminate hundreds of thousands of jobs, while handing the savings to stockholders and concentrating wealth with them.

Looks on pace to eliminate every human job over 10 years.

What is the hard limiting factor constraining software and robots from replacing any human job in that time span? Lots of limitations of current technology, but all seem likely to be solved within that timeframe.


What data do you have to support such a claim?

From Zuckerberg, for example:

>> "a lot of the code in our apps and including the AI that we generate, is actually going to be built by AI engineers instead of people engineers."

https://www.entrepreneur.com/business-news/meta-developing-a...

Ikea's been doing this for a while:

>> Ingka says it has trained 8,500 call centre workers as interior design advisers since 2021, while Billie - launched the same year with a name inspired by IKEA's Billy bookcase range - has handled 47% of customers' queries to call centres over the past two years.

https://www.reuters.com/technology/ikea-bets-remote-interior...


By your own admission, Ikea eliminated 0 jobs and you gave no number for Meta.

Do you expect all companies to retrain? Do you expect CEOs to be wrong? Do you expect AI to stay the same, get better, or get worse? I never made the claim that new jobs will NOT be made, that is yet to be seen, but jobs will be lost to AI.

https://www.theguardian.com/business/2023/may/18/bt-cut-jobs...

>> “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said. “There is a sort of 10,000 reduction from that sort of automated digitisation, we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.”

Goldman Sachs:

https://www.gspublishing.com/content/research/en/reports/202...

>> Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.


I'm on your side, but there are two readings of these reports:

1) "We are serious, this is going to happen."

2) "AI is big right now so if we hype it we might get some money!"


I do not have any hard data from 10 years from now.

My speculation is based on not seeing any constraints that will block progress of machine intelligence from reaching those capabilities within 10 years.

Also, Kurzweil's predictions from the early 2000s have been eerily prescient, and this is the time frame he predicted for the Singularity.


It's foreign investment money into the US. Softbank and MGX are foreign and presumably stumping up much of the cash.

> Why are corporations announcing business deals from the White House?

You're answering your own question:

> potential job creation. Which could be significant


The US is now officially a full-on oligarchy. It always was one, it's just that the powers that be don't care to hide it anymore and are flaunting that they have the power.

For profit? I don't understand what's complicated about this.

This is my question too, but I haven't seen a journalist ask it yet. My baseless theory: Trump has promised them some kind of antitrust protections in the form of legislation to be written & passed at a later date.

An announcement of a public AI infrastructure program joined by multiple companies could have been a monumental announcement. This one just looks like three big companies getting permission to make one big one.


Easier: Trump likely committed that the federal agencies wouldn't slow roll regulatory approval (for power, for EIS, etc.).

Ellison stated explicitly that this would be "impossible" without Trump.

Masa stated that this (new investment level?) wouldn't be happening had Trump not won, and that the new investment level was decided yesterday.

I know everyone wants to see something nefarious here, but simplest explanation is that the federal government for next four years is expected to be significantly less hostile to private investment, and - shocker - that yields increased private investment.


That is a better one. I don't know why three rich guys investing in a new company would result in a slowness that Trump could fix, though, and a promise to rush or sidestep regulatory approval still sounds nefarious.

Lots of politicians announce major investments in their area.

If the announced spending target is true, this will be a strategic project for the US exceeding Biden's stimulus acts in scale. I think it would be pretty normal in any country to have highest-level involvement for projects like this. For example, Tesla has a much smaller revenue than this and Chancellor Olaf Scholz was still present when they opened their Gigafactory near Berlin.

It’s interesting to me that a certain type of person is so susceptible to buying into this fable of wokeness, especially when it pertains to universities. Almost like there is a woke mind virus, but it’s not infecting the people they think it is.

I attended university in the mid 2010s, so close to peak “wokeness”, and I never witnessed or heard of anything like what pg is describing. In my experience it was totally fine to hold just about any political/ethical view as long as you were a decent human being to your fellow classmates. There certainly was no political correctness police forcing us to assimilate.


The popular perception, especially in certain circles, is that there's been a rash of "cancellations" and extensive banning of, especially, outside speakers on college campuses, and also to some extent professors, accompanied by large and successful movements there to accomplish those outcomes.

In fact, there are so comically few cases of any of that that the couple of real-ish ones are always cited by those advancing that position, plus a handful that really, really aren't that sort of thing at all (always look up the full story; 100% of the time they omit context that totally reframes what was happening, a phenomenon more reliable than most things in life).

Real data exist on things like speakers' appearances at schools being cancelled, and it's most fair to say that the trend there is it's gone from "damn near never happens" to "still damn near never happens". And it's not because controversial right-wing sorts, which we may presume would be the most likely to be banned, aren't even trying to speak on campuses when e.g. invited by friendly organizations—they are, and frequently do.

The entire phenomenon is extremely close to being imaginary. That's why you, actually being there and not just going by social media and pop-political-book and talk radio and podcast "vibes", didn't see it.


On YouTube, watch the Evergreen State College 3-part documentary by Bret Weinstein. This is much more common than your anecdote suggests, unfortunately. Granted, this happened in 2017, so a few years after your time in college, but I would argue "peak wokeness" sits between 2016 and today, in large part due to Trump's first election win.

1. https://www.youtube.com/watch?v=FH2WeWgcSMk

2. https://www.youtube.com/watch?v=A0W9QbkX8Cs

3. https://www.youtube.com/watch?v=2vyBLCqyUes

This should make anyone's skin crawl with the way this college's faculty and staff were treated, and the childish behavior of the students to allow this to happen. This gives a reason why "college kids" are no longer considered adults.


[flagged]


> "genital mutilation of children (gender affirming surgery)"

In the past 4 years in the USA there have been:

- roughly 14.4 million children born, half of them are boys (7.2 million) and 57% of those circumcised. 4.1 million non-consenting genital mutilation surgeries on people who didn't ask for them, mostly infants.

- 4160 breast removal surgeries in minors under 17.5 years old on people who did ask for them, mostly teens.

- 660 phalloplasties in the same group.

We should definitely wonder why Republicans are fine with roughly a million non-consensual genital mutilation surgeries every year, mostly on infants, but against a thousand-times-smaller number of surgeries mostly on teens willingly asking for them. We should wonder this in the context of Republicans pushing back against legislation raising the minimum marriage age:

- https://www.independent.co.uk/news/world/americas/louisiana-... - "If they’re both 16 or 15 and having a baby why wouldn’t we want them to get married?" - said representative Nancy Landry, a Republican from Lafayette

- "The West Virginia bill is an outright ban on all marriages under 18. When the House advanced it to the Senate with a resounding 84 votes in support, just over 12 Republicans voted against it" ; ""The only thing it's going to do is cause harm and trouble in young people's lives," Harrison County Delegate Keith Marple, a Republican and the lone person to speak against the state bill" - https://www.newsweek.com/republicans-make-case-child-marriag...

i.e. Republicans being fine with 15 year olds "making their own choices" when it comes to marriage.

> "stating pronouns as a performative act" ; "Continue to deny that this worldview exists, and you will continue losing elections."

This is the United States where you stand up every day in school and performatively pledge allegiance to a flag, yes? Where you stop strangers in the street to "thank them for their service"? How are you so annoyed about someone putting "he/him" next to their name (but not about them putting captain/corporal/major/doctor/reverend next to their name), and as a response you vote for a man who admits sexual assault, has been convicted of federal crimes, lies about his experience, knowledge and credentials, spent $141,000,000 of your money playing golf - mostly at his own golf clubs, used the presidency to (illegally!) promote Goya products, nepotistically sent his own children as official US representatives to meetings? A president who performatively attends church for photo shoots but doesn't regularly attend church for prayer?

It's this kind of behaviour which gives rise to the jokes "the Right will eat a shit sandwich if it means the left will catch a whiff of their breath" and which makes a mockery of the claims that it's all the left's fault; the Right is fixated on trivial bullshit, arguing for the right to be able to lie and be jerks without being fact checked or facing any consequences, without a sense of proportion of different events, obsessed with being angry about the left's feelings and calling them snowflakes, while choosing who to vote for because a film character gets black skin instead of white skin.


Great catch. Super disappointing that AI companies continue to do things like this. It’s a great result either way but predictably the excitement is focused on the jump from o1, which is now in question.


To me it's very frustrating because such little caveats make benchmarks less reliable. Implicitly, benchmarks are no different from tests in that someone/something who scores high on a benchmark/test should be able to generalize that knowledge out into the real world.

While that is true with humans taking tests, it's not really true with AIs evaluating on benchmarks.

SWE-bench is a great example. Claude Sonnet can get something like a 50% on verified, whereas I think I might be able to score a 20-25%? So, Claude is a better programmer than me.

Except that isn't really true. Claude can still make a lot of clumsy mistakes. I wouldn't even say these are junior engineer mistakes. I've used it for creative programming tasks and have found one example where it tried to use a library written for d3js for a p5js programming example. The confusion is kind of understandable, but it's also a really dumb mistake.

Some very simple explanations, the models were probably overfitted to a degree on Python given its popularity in AI/ML work, and SWE-bench is all Python. Also, the underlying Github issues are quite old, so they probably contaminated the training data and the models have simply memorized the answers.

Or maybe benchmarks are just bad at measuring intelligence in general.

Regardless, every time a model beats a benchmark I'm annoyed by the fact that I have no clue whatsoever how much this actually translates into real world performance. Did OpenAI/Anthropic/Google actually create something that will automate wide swathes of the software engineering profession? Or did they create the world's most knowledgeable junior engineer?


> Some very simple explanations, the models were probably overfitted to a degree on Python given its popularity in AI/ML work, and SWE-bench is all Python. Also, the underlying Github issues are quite old, so they probably contaminated the training data and the models have simply memorized the answers.

My understanding is that it works by checking if the proposed solution passes test-cases included in the original (human) PR. This seems to present some problems too, because there are surely ways to write code that passes the tests but would fail human review for one reason or another. It would be interesting to not only see the pass rate but also the rate at which the proposed solutions are preferred to the original ones (preferably evaluated by a human but even an LLM comparing the two solutions would be interesting).
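To make that evaluation criterion concrete, here is a minimal sketch of SWE-bench-style pass/fail scoring. The names and toy tests are illustrative assumptions, not the benchmark's real harness (which applies git patches to real repos and runs pytest): an instance counts as resolved only if the tests added by the human PR now pass and the pre-existing tests still pass.

```python
# Toy sketch of SWE-bench-style scoring (hypothetical names; the real
# harness applies a git patch and runs the repo's test suite).
# An instance is "resolved" only if the model's patch makes the
# previously failing tests pass without breaking the passing ones.

def is_resolved(patched_fn, fail_to_pass, pass_to_pass):
    """Each test is a callable taking the patched function,
    returning True on success."""
    return (all(t(patched_fn) for t in fail_to_pass) and
            all(t(patched_fn) for t in pass_to_pass))

# Hypothetical issue: a clamp() that mishandled negative inputs.
def model_patch(x):  # the "model's" proposed fix
    return max(0, min(10, x))

fail_to_pass = [lambda f: f(-5) == 0]    # test added by the human PR
pass_to_pass = [lambda f: f(3) == 3,     # pre-existing behaviour
                lambda f: f(99) == 10]

print(is_resolved(model_patch, fail_to_pass, pass_to_pass))  # True
```

Note that this only checks behaviour against the included tests, which is exactly the commenter's point: a patch can satisfy `is_resolved` while still being code a human reviewer would reject.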


If I recall correctly the authors of the benchmark did mention on Twitter that for certain issues models will submit an answer that technically passes the test but is kind of questionable, so yeah, good point.


I felt this same way as image generation was rapidly improving, but I've been caught by surprise and impressed with how resilient we have been in the face of it.

Turns out it's surprisingly easy, at least for me, to tune out the slop. Some platforms will fall victim to it (Google image search, for one), but new platforms will spring up to take their place.


What frustrates me the most about OpenAI is that as recently as this summer they were talking non-stop about how gigantic $100 billion models are all you need for AGI and that it’s just a matter of time until we reach this scale. And if you didn’t see this you’re a simpleton who doesn’t understand how exponential curves work.

And then all of a sudden o1 comes out and the narrative from them has shifted entirely. “Obviously massive models aren’t enough to get to AGI, but all we have to do is scale up inference time compute and we’ll get there!” And if you don’t see this you’re just a simpleton.

I wish that OpenAI was called out for this shift more often. Because I haven’t heard even one of their employees acknowledge this. At some point you have to just ignore them until they start actually publishing science that supports their beliefs, but that won’t happen because it doesn’t generate revenue.


> how gigantic $100 billion models are all you need for AGI and that it’s just a matter of time until we reach this scale

> Obviously massive models aren’t enough to get to AGI, but all we have to do is scale up inference time compute and we’ll get there!”

Corporate wants you to tell the difference between these two pictures...

Obviously the latter was a step required to make former work. Always has been.


> that won’t happen because it doesn’t generate revenue.

OpenAI made real progress towards a computational understanding of human language and cognition. I'm sorry they have become a for-profit entity (the paperwork lags behind reality, of course). A fiduciary duty does not serve humanity. The quality and credibility of their communications have fallen dramatically.


> how gigantic $100 billion models are all you need for AGI and that it’s just a matter of time until we reach this scale. And if you didn’t see this you’re a simpleton who doesn’t understand how exponential curves work.

OpenAI made no such claim. LLM stans on the internet definitely made such claims, that the Stargate project would be AGI and whatnot. But, like crypto bros, GPT hyperfans are just constantly deluded/lying, so you shouldn't project their claims onto the corporations they simp for.

That being said, Anthropic's CEO made a claim closer to what you're saying, that a 10-100 billion dollars model would be better than a human in almost every way.

https://www.itpro.com/technology/artificial-intelligence/dol...


America gave a trillion dollars out so a lot of people can have 1500 dollars. We have enough money, and we are all not going to live forever. I don’t know what the hold up is.


I don't remember them saying "$100 billion models are all you need for AGI." Don't suppose you have a link?


It’s not an exact figure, but plenty of employees claimed that after we train a model with 2-3 orders of magnitude of compute over GPT-4 we’d reach AGI, which puts us at $10-$100 billion.

See Situational Awareness for one example of this.


Two years ago Altman made clear that progress needs more than scaling model size / parameter count.

https://www.reddit.com/r/mlscaling/comments/12mktym/sam_altm...

This summer he made a comment saying "we can make GPTs 5,6,7 more powerful" and someone editorialized it as being "via parameter count/scale" which he didn't say.


The loudest opinions surface. I wouldn’t take lack of public sympathy to mean that the average American condones this type of behavior.

I am no more sympathetic than I am towards the many people killed each day in the US, but I think the assassination and the subsequent justification of it because he was a small cog in the system is abhorrent. And I hope that the majority of the country feels the same way.


Whether or not the assassination is justified, I don't think it makes sense to call him a small cog. He's the leader of one of the largest health insurance companies in America.


Literally one of the biggest cogs in this machine.


What assassination would be "justified"?

This tragedy in the same city that could potentially throw the book at a Daniel Penny...


> What assassination would be "justified"?

I think it depends where one's sympathies lie.


Assassination assumes some premeditation.

I can at least grasp killing in some sort of immediate self-defense situation.

However, assassination seems a slippery slope to some anarchy that is unlikely to please anyone this side of Hell.


As much as I hate to point it out, it’s not so simple.

He had a legally enforceable mandate to maximize shareholder value. There’s some wiggle room in how that’s accomplished, but not as much as it seems from the outside.

That does make him, much as I hate to say it, a small cog.


It’s funny to me that so often on HN we have discussions about making ethical choices in who we work for and which projects we contribute to, yet the sentiment on this CEO is honestly the most sympathetic I have seen in any comment section

He chose to take the role. He profited from it massively and did more to perpetuate it than any other single individual at the company.

If we were talking about a developer who was writing software for drone bombs wiping out families, yeah maybe that’s a cog. But still there would be no shortage of judgement on that choice of profession

It’s plainly obvious that insurance companies are not good faith actors working to help people. They lobbied for this system, they are the beneficiaries of it, and they are reaping what they’ve sown


> He had a legally enforceable mandate to maximize shareholder value.

[citation needed]

There is no legal basis for this:

* https://corpgov.law.harvard.edu/2012/06/26/the-shareholder-v...

* https://www.washingtonpost.com/news/wonk/wp/2013/09/09/how-t...

* https://evonomics.com/maximizing-shareholder-value-dumbest-i...

It is simply one view that just happened to become popular during the Reagan years and has continued on:

* https://en.wikipedia.org/wiki/Friedman_doctrine

And while we're at it, shareholders are not the owners of a corporation:

* https://www.currentaffairs.org/news/2021/12/who-actually-own...


But from the assassins point of view, the system is abstract (therefore unshootable) while Brian Thompson was a man (therefore shootable).


Wow, it really is a shame he was forced into this job and had no agency.


Nature abhors a vacuum. If not this dude, would’ve been someone else.


There is no legal mandate to maximize shareholder value. That’s just an excuse used to justify sociopathic behavior.

https://www.investopedia.com/terms/s/shareholder-value.asp


But no less deserving of being executed. His decisions led to the deaths of thousands. Justice — in the true sense of the word — demands an answer for this. If the systems we have in place won’t do it, then extrajudicial means are justified. And encouraging to see, frankly.

Just because his crimes were legal does not mean he should not face punishment.


Justice is subjective. What is a crime is subjective.

Hence we have laws - an eternal, never perfect project to find an agreed definition of justice.

Abandoning centuries of precedent of law happens in some places from time to time and they’re not places you’d live by choice.


> The loudest opinions surface. I wouldn’t take lack of public sympathy to mean that the average American condones this type of behavior.

I'm not sure I agree with you there. The indifference in some quarters, celebration in others, across either side of the political divide insofar as I've seen, leads me to believe that in some cases, the well of sympathy is running dry – or, worse, has run dry.


For the jury to convict you need all jurors to agree.

From what I've seen, 99% of feedback is of "scumbag deserved it" flavor.


Elon and Vivek have no authority to close a federal agency, and Trump doesn’t either.

We’re currently in the bloviating stage of this election cycle; once attention dies down I fully expect DOGE to achieve very little and to die a slow death.

The sad part of all of this is that the government could absolutely be more efficient than it currently is, while still providing the same services. But that’d take serious thought and consensus building, which the incoming administration has no desire to engage in.


> I fully expect DOGE to achieve very little

There will be a website on Day 1 that will get overloaded with traffic and then it'll be soon forgotten about.


>Elon and Vivek have no authority to close a federal agency, and Trump doesn’t either.

You're right but the Republican-led House and Senate absolutely do have the power to both do that and grant these guys the power to do it.


I think it’ll keep improving as more folks move over. This is just a temporary artifact of X refugees being the first to migrate.


I’ll check it again in a year. Their “algorithm” is another polarization device, however. If you click or like anything, your feed fills with extremely similar things. Usually, that makes recovery from a closed mindset impossible.


That's why it's cool that you can bring your own algorithm, for example there's this preset which will give you a feed of your "quiet followers", people who rarely post: https://bsky.app/profile/did:plc:vpkhqolt662uhesyj6nxm7ys/fe...


When has a single social thing in human history improved with more folks moving over (beyond a small, early threshold)?

