
I think Danny should avoid future interviews with the Verge. heh. But seriously, I just don't like how the tone of the interview came across. Does anyone see Danny yelling in these conversations to warrant the use of the exclamation mark? If anything this makes the Verge look bad.

As for addressing SEO concerns: there is a lot of frustration out there these days among small sites and companies trying to make their way, and Google results can be hit or miss. Major publications like CNET, Forbes, CNN, etc. are purposefully creating content and cramming it with affiliate links to sell crap to the masses. When a major publication writes about something it's not an expert in, one has to start to raise eyebrows and wonder... It's painfully obvious. They get away with it because they are huge brands and can rank for anything, so they are abusing their power.

Sean Kaye says it best here: https://twitter.com/SeanDoesLife/status/1716935563075559630

Additionally, I want to mention an obvious manipulative practice that companies seem to be rewarded for when, if anything, they should be penalized for it.

And that is avoiding the "standard news" syntax of published content via manipulative URLs. Namely, avoiding using /id/date/title-of-post (or something similar) and just using the rootdomain/title-of-post to make it rank higher and seem more important than it is. These pages are not an About page, or Privacy Page, or Terms of Service page. It's manipulation and a shady practice, and companies should be penalized for it.


I don't know Danny, but as a neutral party I'd say he came off awful and not the right person to meet with reporters. He is neither warm nor helpful, nor entertaining nor thoughtful (humble, etc.). He comes off as a burned-out newspaper columnist.

I think the key problem is that he plays with words and what they mean, parsing them in a way that misunderstands the true meaning behind the question, and then tries to make you feel lesser. Does he really not understand why writers are saying no one can find anything (translation: Google is showing fewer pages for the keywords searched, less content is being returned, and more ads are polluting the results)? He changes it into: millions are searching, I can take a picture of an apple and Google will find it. Search results are better, he says (in a "see, you are wrong and stupid for suggesting this" tone). He completely misses the point: EVERYONE is noticing how bad the results are compared to what they were. Sure, millions of searches still happen every day, but people are unhappy and they see the quality as lower. People don't automatically leave unless there is a reason and a place to go to. Google is giving them a good reason.

Google should have sent someone who could have a thoughtful discussion, an honest discussion, or even a deceitful-but-pleasant conversation.

I don't think this guy personally is the reason why things went off the rails but he paints a picture that the search team has their heads in the sand and they are patting themselves on the back with how great results are when everyone can see the emperor has no clothes on.


Gary Illyes and John Mueller (other prominent Google liaisons) are also kind of standoffish. Ginny Marvin is another SEland hire who isn't as mean-spirited, but her responses are just as limited in how helpful they are. John and Gary commonly mock people asking questions and just parrot Google's talking points of "just create quality content and we will rank it well," which frequently do not mirror reality. If you have SEO questions, your only other official channel for support is the Google forums -- maintained by volunteers outside of Google. That SEOs have no official support from the most prominent search engine (besides ambiguous documentation, ~5 liaisons, and forums of dubious value) is laughable. If your site gets deindexed or penalized, good luck finding out why with Google.

With Bing Webmaster tools, I have actually emailed support and gotten a bug fixed within 2 weeks. With Google, your best option is yelling into the abyss.


I know it was a long post I made. But yes, I (and we) recognize people want the results to be better. I covered this at the end (along with some other parts):

"That said, there’s room to improve. There always is. Search and content can move through cycles. You can have a rise in unhelpful content, and search systems evolve to deal with it. We’re in one of those cycles. I fully recognize people would like to see better search results on Google. I know how hard people within Google Search are working to do this. I’m fortunate to be a part of that. To the degree I can help — which includes better communicating, ensuring that I reflect the humbleness that we — and I feel — I’ll keep improving on myself."


I read The Verge and listen to their podcast. They don’t come across as overly sensationalist and seem pretty fair in other interviews. I think Google knows their search product isn’t as useful as it once was. Not sure about the root causes, but that’s just my impression from using it for 15 years.


>They don’t come across as overly sensationalist and seem pretty fair in other interviews.

Funnily enough, they are the first site that comes to mind when I think about all those horrible blogspam articles meant to stoke common argumentative points back in the early 2010s. Android vs. iPhone, a barely relevant influencer making statements tangentially related to tech, a growing focus away from tech and towards why the tech industry is actually every -ism under the sun, etc.

I hope they got better over the last 7 years or so since I stopped reading most news sites in favor of YouTubers or searching for specific domain experts or niche, no-nonsense websites.

>I think Google knows their search product isn’t as useful as it once was.

I honestly think the elephant is too big to see the full picture of. I can 100% believe that the search team has some novel tech to really make the best search engine from a technical standpoint. I can also 100% believe that some other team (maybe in ads, maybe even as high as special fellows) injects into that pipeline and adds in stuff purely meant for profit, even if results suffer. Or that some other support team does in fact work specifically with big sites to bump their SEO.

No one at Google can hold the entire codebase of such a product in their head. It's all too easy to obfuscate such enshittification into it without the well-meaning engineers being any the wiser.


The primary criticism I have of The Verge is that they often have pretty non-technical people comment on technical things. It comes through hardest on the podcast, where Alex Cranz often seems out of her depth.


For me, it started around the end of 2018. It seemed like independent blogs and small sites got nuked from orbit, and articles on sites like Medium took precedence.

My take on what happened is that they decimated a good product in the name of "fighting misinformation" by surfacing content mainly from sites that had moderation policies of whatever sort. It was their way of effectively applying the same App Store-style moderation across the entire web.

Things seem to have continued sliding downward in the years since. I won't be surprised when AI eats their lunch.


> Namely, avoiding using /id/date/title-of-post (or something similar) and just using the rootdomain/title-of-post to make it rank higher and seem more important than it is.

This causes the small wayward fragments of Library Science curriculum embedded in my brain to quiver with rage.

Bonus points if the tail end of the URL contains what may-or-may-not be a bunch of tracking shit and it's not obvious how much it can be shortened without breaking the link.


I found it very weird that he was interviewed by The Verge in his capacity as a Google employee, with full blessing from the company's comms team and whoever else, and then decided to post a rebuttal of the article on his personal blog with a large disclaimer that the thoughts are his alone and not his employer's.


I get what you're saying; it's weird to interview him as a Google employee. But actually it would be really weird if they didn't include Danny Sullivan in the article in some capacity. Danny Sullivan, over the years, has been such an influential voice when it comes to Search and SEO. He previously was on the other side, not working for Google.


It's because when I was interviewed, everything I said was speaking officially for Google. You can attribute anything there to the company directly.

My blog post -- I wrote that on my own. No one from the Google communications team reviewed it, approved it, vetted it and so on. That's what I was trying to explain.

That doesn't mean, of course, people won't think it somehow reflects on Google or what I do there. It no doubt will. But that's not quite the same thing as something being an official company statement.


> And that is avoiding the "standard news" syntax of published content via manipulative URLs. Namely, avoiding using /id/date/title-of-post (or something similar) and just using the rootdomain/title-of-post to make it rank higher and seem more important than it is. These pages are not an About page, or Privacy Page, or Terms of Service page. It's manipulation and a shady practice, and companies should be penalized for it.

As someone who has spent years manipulating ranking, I can tell you this has nothing to do with affecting ranking and is most likely about optics/human readability increasing CTR.

If you have data that shows urls like "/id/date/title-of-post" rank worse than "rootdomain/title-of-post" (which is nearly impossible to accurately measure due to the nature of how things are _really_ ranked) I'd argue that the rankings are related to the CTR rather than the URL structure.

I've explored and tested various URL structures across xxx,xxx domains with effectively equal quality content (using "manipulative" ranking methods and content generation tactics) and there was no measurable difference in ranking.

> These pages are not an About page, or Privacy Page, or Terms of Service page.

No judgement, but this seems like an odd stance to me. You seem to feel there is some sort of established standard in the structure of website pages/hierarchy, particularly one that should have punishments enforced against those who don't abide by it... Thankfully there is not; if there were, then there would have to be some sort of agreement on these things - who is going to make those decisions? Who are those decisions going to be optimal for?

No, to all of that.

> As for addressing SEO concerns: there is a lot of frustration out there these days among small sites and companies trying to make their way, and Google results can be hit or miss. Major publications like CNET, Forbes, CNN, etc. are purposefully creating content and cramming it with affiliate links to sell crap to the masses. When a major publication writes about something it's not an expert in, one has to start to raise eyebrows and wonder... It's painfully obvious. They get away with it because they are huge brands and can rank for anything, so they are abusing their power.

> Sean Kaye says it best here: https://twitter.com/SeanDoesLife/status/1716935563075559630

What? No. The problem isn't the publishers - the problem is the search engine.

They built a facade. They _cannot_ manage getting relevant results from relevant sources where there is financial incentive to be ranked higher than someone else. It's patches and rules and filters and manual actions all the way up. They can say otherwise all they want and it's bull. They're just trying to get just good enough results for the vast majority of queries so they can keep selling ads - they lost the battle with SEO/spam a _long_ time ago.

You can't/shouldn't penalize the publishers for capitalizing on their "power". You call it an abuse of power - what are they abusing? What are the boundaries? Who set them? Again - expectations on your end, but where do they come from? If you're believing what you're reading at face value re: SEO and think everyone is "playing by the rules" you're in for a rude awakening. That "power" is given to them by Google and their algorithm(s) and search quality team. That "power" is _ultimately_ granted to them by their backlinks and nothing more - they're the billionaires of SEO. They wield the power granted to them by the search engines and they would be foolish not to capitalize on it.

On the other side - Google should have done something about all this years ago. But.. how?

> When a major publication writes about something it's not an expert in, one has to start to raise eyebrows and wonder... It's painfully obvious.

You say they're not an expert - but who says you know what's what? And how do you even define what that topic is, let alone who the experts are? How do you assign "expert" status to a website in various topics (that also need to be defined)? Now, we need to do this for _every_ topic - it's not possible. Ok, so we'll choose the important topics and we'll manage who the "experts" are for those topics... This is what they've done. But even then, if you're trying to break into one of these protected topics as a non-behemoth - good luck.

Again, not the problem of the publishers that they can throw garbage content at valuable keywords and outrank the small players with much better content. That's Google's fault. It's _their_ job to determine those rankings and they're not good at it. Refer to my earlier statement about it all being a facade. They're just trying to be "ok" enough - there's no way to be _good_ at search-everything. Too much financial incentive to game the results - they'll never stay ahead of the curve without alienating too many "small" publishers in the process.

There's just no way that what they're saying publicly is what they're actually doing or saying privately about how all this works. _Nobody_ is playing fair here.


> As someone who has spent years manipulating ranking, I can tell you this has nothing to do with affecting ranking and is most likely about optics/human readability increasing CTR.

I do not agree. For example, CTR can be increased by modifying the design/text of a button, or modifying the placement of the button, etc. CTR will not increase or decrease based on the structure of the URL. Hence the word CLICK in "CTR". Most of the time, if the URL is listed somewhere, it's truncated. Mobile phones trim it down to the domain name.

Plus it's just bad practice and will run into problems eventually. What happens when you have similar titles? Does this increase CTR or increase mistakes?

I still think it's a shady practice and can't think of a single reputable major publication that would utilize that structure for Editorial. They should be penalized for a blatant attempt at manipulation. There is no other logical reason for it.

The Verge: /features/23931789/seo-search-engine-optimization-experts-google-results.

> If you have data that shows urls like "/id/date/title-of-post" rank worse than "rootdomain/title-of-post" (which is nearly impossible to accurately measure due to the nature of how things are _really_ ranked) I'd argue that the rankings are related to the CTR rather than the URL structure.

Of course I don't have the data, but one has to assume they are doing it for one simple reason. Manipulation in search. It's not for a better user experience. How often are you typing in URLs manually?

> No judgement, but this seems like an odd stance to me. You seem to feel there is some sort of established standard in the structure of website pages/hierarchy, particularly one that should have punishments enforced against those who don't abide by it... Thankfully there is not; if there were, then there would have to be some sort of agreement on these things - who is going to make those decisions? Who are those decisions going to be optimal for?

Generally speaking, yes, URL taxonomy has best practices. I don't believe someone is going to create an about us page with /id/date/about-us and think that is a good idea, but anything is possible.


> Plus it's just bad practice and will run into problems eventually. What happens when you have similar titles? Does this increase CTR or increase mistakes?

In support of your point of "manipulation" - does it matter? They don't care about the actual content - they just need you to click so they get their ad views. It doesn't matter if there's more than one entry in the database with the same slug - or what content is even there.

> I still think it's a shady practice and can't think of a single reputable major publication that would utilize that structure for Editorial. They should be penalized for a blatant attempt at manipulation. There is no other logical reason for it.

I agree that it's non-standard and that they're doing it for a reason not in the best interest of the internet as a whole. But, shady? Eh - by the same logic (in my mind) you'd have to call the person who named their business AAA Lockpicking shady because they took advantage of a "standard" way that directories work to get their name above others.

> one has to assume they are doing it for one simple reason. Manipulation in search. It's not for a better user experience.

Ok, so every web service with a presence on search engines is manipulative and should be punished if they do anything that's not in the best interest of the user experience? (I understand this is pedantic, but from the perspective of the search engine - who draws the lines about what is and isn't acceptable, or seen as manipulation?)

I agree with what you're saying in theory, but I'm not sure I can get on board with penalizing any of these publishers for doing what is within their power to improve their position. Like... at some point, as public companies, you could argue that they're obligated to capitalize, no?

We deal with "manipulative" marketing all day, every day. We're drowning in real manipulation where massive corporations are employing people with education and experience to help them manipulate us as much as possible. I have a hard time putting "optimal" url structure in that bucket.

Google/MS/etc should, instead, draw some real lines and enforce their existing and extended policies in a consistent and transparent way. That's the solution here - not pitchforks for those who are taking advantage of what works.

> Generally speaking, yes, URL taxonomy has best practices. I don't believe someone is going to create an about us page with /id/date/about-us and think that is a good idea, but anything is possible.

For what it's worth - in my testing/experience, dates and _very short_ 'category'/'topic' slugs improved rankings compared to /keyword-only. i.e.: /shoe-reviews/20231027/blue-shoes proved optimal over /blue-shoes. (Without the date it was equivalent to keyword-only.)

I share your frustration - I just don't see it from your perspective that the publishers should be punished. They're playing by the rules. The rules are terrible and that's not an accident. Google doesn't want specific guidelines that can be/are enforced - they don't want search to be a meritocracy, no matter what they say. They've had plenty of time to make it that and they've gone the complete opposite direction. It's not the publishers that are to blame for taking advantage of the tools and resources available to them to legally improve themselves.


To me this is fair criticism. I was thinking the same thing. However, the world does need a viable competitor to Nvidia in AI.

AI is not going anywhere. This is not a fad like some of the others mentioned but, more likely than not, what the next decade of innovation will be built on.


A 2.37% click-through rate is far from ideal. Twitter analytics specifically does not mention clicks on links for this very reason. Instead they simply track all clicks... links, profile, hashtags, etc.


I think that is a fantastic click-through rate for something that passively shows up in a feed. That's probably about as high as you can realistically get, to be honest. Most posts get fractions of 1% click-through rate.


You are wrong; sub-1% is closer to what you would get for an advertisement.


For any random feed link that was not requested and just passively shows up to everyone who happens to follow Elon, it is a fantastic click-through rate. Please show some data on passive feed click-through rates on social media sites.


Given that Twitter just changed the algorithm to artificially inflate Musk's tweets, I think a tweet from him could well be considered a paid advertisement at this point. A very, very expensive paid ad.


44 billion dollars expensive?


That is what you would expect. An impression is merely having the tweet show up in your feed, even if you only see it briefly. Whether or not you read or click, etc., is secondary.


Actually, this is very surprising to me. I am well aware of the click rate of advertising. But for something like this, I would expect closer to 10%. Turns out most people just don’t read the content. And headlines dictate how our society is informed.


For what it's worth, I would _kill_ for a 2.37% click-through rate on my blog posts when they are linked on social media. Turns out most people on silos like Twitter want to stay in the silo instead of actually reading things on people's own websites.


Yeah, this is either a Bitcoin-like scenario where the initial boom is happening, or we are a bunch of idiots who thought this was a good idea. At this point I have no idea. The amount of money going around is eye-opening.


Any ETA on when chargeback protection will be available for Stripe.js?


No ETA just yet, but we are exploring ways to bring Chargeback Protection to other integration options!


So, a 0.4% fee on $10,000 in sales amounts to $40. And losing one chargeback dispute costs you $15, plus time and effort to fill it in and submit it. Not to sound cynical, but odds are you will probably lose it anyway because the customer didn't feel like paying anymore.

Am I missing something? The chargeback protection fee doesn't sound terrible


Yes, I do lose most disputes regardless of what the situation is, and I did forget about the $15 fee. I guess it really depends on the type of business; it will probably make sense for some, but as a consumer I do see a lot of small businesses out there that should be looking first at how to get their chargeback rate low enough that they don’t need something like this in the first place.


I believe it's happened before where people have set up bot accounts to buy game licenses from indie developers, just to resell the keys on G2A/Kinguin. However, they do it with stolen credit cards, so the indie developers get hit with a $15 charge PER order. If there are 1,000 keys, that's $15,000 in chargeback fees.

So it might be the reverse; it might be a no-brainer to accept this extra protection because, like you said, a $40 fee on $10,000 isn't that big of a deal, given only 3 chargeback fees are necessary to break even. Unless you can REALLY manage your chargebacks. If your product costs $20, so 500 people buy it, you only need a 0.6% chargeback rate before you should've taken the extra protection.
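Rough back-of-the-envelope version of that (same example numbers as above; TypeScript just for illustration):

  // Break-even on the 0.4% Chargeback Protection fee vs. $15 dispute fees.
  const salesVolume = 10_000;   // $10,000 in sales (example from this thread)
  const protectionRate = 0.004; // 0.4% protection fee
  const disputeFee = 15;        // $15 per lost chargeback

  const protectionCost = salesVolume * protectionRate;   // $40
  const breakEvenDisputes = protectionCost / disputeFee; // ~2.67, so 3 disputes

  // At a $20 product, $10,000 is 500 orders, so the break-even
  // chargeback rate is roughly 3 / 500 = 0.6%.
  console.log({ protectionCost, disputesToBreakEven: Math.ceil(breakEvenDisputes) });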


And as a poster above says, he'd sign up for this for "revenue certainty".

Would you rather make $10,000-$20,000, or make $17,000? For some, they'd take the sure $17K over the possibility of making less and losing money after cost of goods sold is taken into account.


Are you assuming that the cost of the disputed transaction - the goods that were sent to the customer and supposedly not coming back - is $0?

For $10,000 of sales of one's music $40 does not sound terrible. For $10,000 of sales of high-end watches $40 to have the reimbursement is a helluva great deal.


I've been testing it for the past 30 minutes or so and found that it doesn't cause the same problems that InstantClick did. (Those were JavaScript errors that would randomly occur.) I'll limit it to a small subset of users to see if any errors are reported, but there is a good chance this could go live for all logged-in users. Maybe even all website visitors if all goes well.

It seems to have no impact on any JavaScript, including ads. Pages do load faster, and I can see the prefetch working.

Just make sure you apply the data-no-instant tag to your logout link, otherwise it'll log you out on mouseover.
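For anyone wondering what that looks like, a minimal sketch (assuming the logout link is a plain <a href="/logout">; adjust the selector to your own markup):

  // Mark the logout link so the prefetch script skips it on hover/touch.
  // data-no-instant is the opt-out attribute the library checks for.
  const logoutLink = document.querySelector<HTMLAnchorElement>('a[href="/logout"]');
  logoutLink?.setAttribute('data-no-instant', '');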


> Just make sure you apply the data-no-instant tag to your logout link, otherwise it'll log you out on mouseover.

Logout links should never be GETs in the first place - they change state and should be POSTs.


POSTs are not links. And a logout service is idempotent, even if you consider that it changes the state of the system.


Lots of people in this thread are confusing “idempotent” with “safe” as specified in the HTTP RFC: https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html


FWIW RFC 2616 was obsoleted by the newer HTTP/1.1 RFCs: https://tools.ietf.org/html/rfc7231#section-4.2


Which still doesn't change GP's point though:

> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe".

(there's an exception listed too, but doesn't apply to logout)

EDIT: I know of someone who made a backup of their wiki by simply doing a crawl - only to find out later that "delete this page" was implemented as links, and that the confirmation dialog only triggered if you had JS enabled. It was fun restoring the system.


I don't know why you think I'm contradicting them. I was just pointing out that there are newer RFCs. They also happen to have a stronger and more complete definition of safe methods.


Ok, so make it a form/button styled to look like a link.


Idempotency is not the issue, the issue is that a user might hover over the logout link, not click it, then move on to the rest of the site and find they are logged out for no reason.


Right, which is why the included library includes an HTML attribute to disable prefetch on a given link.


OP’s point was that logout should not be implemented with a link/GET but instead with a button/POST for exactly this reason.


A logout action is idempotent, though. You can't get logged out twice. In my opinion, that's the use case for a GET request.

I just checked NewRelic, Twilio, Stripe and GitHub. The first 3 logged out with a GET request and GitHub used a POST.


Idempotency has nothing to do with it. Deleting a resource is idempotent as well. You wouldn't do that via GET /delete

A GET request should never, ever change state. No buts.

Just because a bunch of well known sites use GET /logout to logout does not make it correct.

Doing anything else, as demonstrated in this and other cases, breaks web protocols. The right thing to do is:

GET /logout returns a page with a form button to log out.
POST /logout logs you out.
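A minimal sketch of that split, assuming an Express app with express-session (the route and session config here are illustrative, not any particular site's code):

  import express from 'express';
  import session from 'express-session';

  const app = express();
  app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

  // Safe: the GET only returns a confirmation page; it changes nothing.
  app.get('/logout', (_req, res) => {
    res.send('<form method="POST" action="/logout"><button>Log out</button></form>');
  });

  // The actual state change lives behind POST (add CSRF protection in a real app).
  app.post('/logout', (req, res) => {
    req.session.destroy(() => res.redirect('/'));
  });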


Depends on your definition of “state.” A GET to a dynamic resource can build that resource (by e.g. scraping some website or something—you can think of this as effectively what a reverse-proxy like Varnish is doing), and then cache that built resource. That cache is “state” that you’re mutating. You might also mutate, say, request metrics tables, or server logs. So it’s fine for a GET to cause things to happen—to change internal state.

The requirement on GETs is that it must result in no changes to the observed representational state transferred to any user: for any pair of GET requests a user might make, there must be no change to the representation transferred by one GET as a side-effect of submitting the other GET first.

If you are building dynamic pages, for example, then you must maintain the illusion that the resource representation “always was” what the GET that built the resource retrieved. A GET to a resource shouldn’t leak, in the transferred representation, any of the internal state mutated by the GET (e.g. access metrics.)

So, by this measure, the old-school “hit counter” images that incremented on every GET were incorrect: the GET causes a side-effect observable upon another GET (of the same resource), such that the ordering of your GETs matters.

But it wouldn’t be wrong to have a hit-counter-image resource at /hits?asof=[timestamp] (where [timestamp] is e.g. provided by client-side JS) that builds a dynamic representation based upon the historical value of a hit counter at quantized time N, and also increments the “current” bucket’s value upon access.

The difference between the two is that the resource /hits?asof=N would never be retrieved until N, so its transferred representation can be defined to have “always been” the current value of the hit counter at time N, and then cached. Ordering of such requests doesn’t matter a bit; each one has a “natural value” for its transferred representation, such that out-of-order GETs are fine (as long as you’re building the response from historical metrics).
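A rough sketch of that /hits?asof=N idea (a hypothetical Express handler; the minute-bucketing and in-memory storage are placeholders, not anyone's real implementation):

  import express from 'express';

  const app = express();
  const buckets = new Map<number, number>(); // hit counts per quantized minute
  const quantize = (ms: number) => Math.floor(ms / 60_000) * 60_000;

  app.get('/hits', (req, res) => {
    const now = quantize(Date.now());

    // Internal state change: bump the current bucket on every access.
    buckets.set(now, (buckets.get(now) ?? 0) + 1);

    // Observable representation: only buckets that are already closed,
    // so repeated GETs of the same ?asof URL always agree and can be cached.
    const asOf = Math.min(quantize(Number(req.query.asof) || 0), now - 60_000);
    res.set('Cache-Control', 'public, max-age=31536000');
    res.json({ asOf, hits: buckets.get(asOf) ?? 0 });
  });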


Don't be a wise ass; with that definition, state changes all the time in memory registers even when no requests are made.

> So, by this measure, the old-school “hit counter” images that incremented on every GET were incorrect

Yes they are incorrect. No Buts.

Two requests hitting that resource at the same exact timestamp would increase the counter once if a cache was in front of it.


That brings me back to the year 2001, when my boss's browser history introduced Alexa to our admin page and they spidered a bunch of [delete] links. *cough cough* Good thing it was only links from the main page to data, and not the actual data. I spent the next few days fixing several of the problems that conspired to make that happen...


As in, anybody with a link to /delete could delete things? No identification/authentication/authorization needed?


> I spent the next few days fixing several of the problems that conspired to make that happen...

Yes, I was a total n00b in 2001. But then, so was e-commerce.


And FWIW, I knew exactly how bad our security was... I kept my boss informed, but he had different priorities until Alexa "hacked" our main page :p


If you're not allowed to change state on GET requests, how do you implement timed session expiration in your API? You can't track user activity, in any way, on GET requests, but you still have to remember when the user was last active.


Idempotence is for PUT requests. GET requests must not have side effects.


I've heard this "GET requests shouldn't have side effects" argument before, but I don't think it works. At least, not for me, or I'm doing something wrong.

For example: let's implement authentication, where a user logs in to your API and receives a session ID to send along with every API call for authentication. The session should automatically be invalidated after x hours of inactivity.

How would you track that inactivity time if you're not allowed to change state on GET requests?


I think this is the argument for PUT instead of POST, not GET instead of POST.


You're confusing idempotency and side effects. A GET should not have any side effects, even if they are idempotent.


It's not about idempotency, but about side effects. The standards say that if it will cause side effects, use POST. Logging out does cause a side effect (you lose your login) and hence should be a POST.

In the old days it might have been acceptable to get away with a GET request, but these days, thanks to prefetching (like this very topic), it's frowned upon.

https://stackoverflow.com/questions/3521290/logout-get-or-po...


GET is also supposed to be “safe” in that it doesn’t change the resource, which a logout would seem to violate.

The whole reason this is supposed to be the case is to enable functionality like this instant-prefetch thing.


Also: sometimes a site is misbehaving (for myself, or maybe for a user we're helping) and it's helpful to directly navigate to /logout just to know everyone is rowing in the same direction.

Using a POST, especially if you're building through a framework that automatically applies CSRF to your forms, forecloses this possibility (unless you maintain a separate secret GET-supporting logout endpoint, I guess).


When I originally started my community site I used GET for logout. However, users started trolling each other by posting links to log people out. It wasn't easy to control, because a user could post a link to a completely different site, which would then redirect to the logout link. So, I switched to POST with CSRF and never had another issue.


That's exactly the problem with idempotency.


Actually, no. Idempotency means that you can safely do the same operation multiple times with the same effect as doing it once. That's a different issue from the no-side-effects rule which GET is supposed to follow.


Thanks, how did you find out about the data-no-instant tag?



I echo the idea that he will try to fill the Matt Cutts role, as mikkelewis mentioned in the comments. SEOs always knew to turn to Cutts when they had a question, and that role hasn't been properly filled since he left. I have a feeling we will be hearing from Mr. Sullivan quite a bit in the coming months. It has to be super exciting going from being an outsider (although well known and connected) to a Google insider. I wonder how others in the SEO community feel about it?


Lockup expiry is always a scary time. The same thing happened to Facebook; combined with mobile monetization and user growth fears, it made the stock tank to the 40s.

Though these are two very different companies. Best of luck to owners of Snap stock.


Awesome update for existing Amazon Chase Visa cardholders. Their Store Card already offered 5% back, so it's nice that everybody gets the new rate now.

A bit weird that we have to verify the card to get the 5% back... you'd think the fact that I've been using it for years would be enough verification.

In any case, don't forget to verify!

Go to Your Account -> Manage Payment Options

Then type in your full credit card number, click Verify, and the rewards rate will change to 5%.


I just checked mine and it was already updated to 5%


Where do you think this free money comes from? Jpm covers it??

