Hacker News | dfgonzalez's comments

Nice initiative! Please consider adopting IAB standard ad sizes like 300x250, 728x90, 468x60, etc., so you can access more remnant inventory.

Right now many ad companies serve Greenpeace or Red Cross ads as filler when they run out of paid ads. It shouldn't be hard to get some impressions from them, but the standard ad sizes will be a common requirement.

Good luck!

-----


I get the impression this is more geared towards sites that would run the Deck, Fusion, et al. In other words, sites that tend to have very limited advertising and don't subscribe to the IAB's definition of size standards.

-----


+1 to this. As much as I despise IAB-sized advertising, it is the standard and you'll get more adoption.

-----


Thanks! Adopting more standard ad sizes as options is a great idea for next steps.

-----


You can; there are plenty of XML feed providers who will give you a feed of ads relevant to the keywords you supply, which you fetch server-side and display to your users.
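A minimal sketch of that server-side flow. The feed URL, query parameter, and XML element names here are all hypothetical placeholders — every real feed provider has its own format:

```python
import urllib.parse
import xml.etree.ElementTree as ET

def build_feed_url(feed_url, keywords):
    # Pass the page's keywords to the (hypothetical) provider as a query string.
    return feed_url + "?" + urllib.parse.urlencode({"q": " ".join(keywords)})

def parse_ads(xml_text):
    # Pull (title, click-url) pairs out of the provider's XML response.
    root = ET.fromstring(xml_text)
    return [(ad.findtext("title"), ad.findtext("url"))
            for ad in root.iter("ad")]

# In production you'd fetch build_feed_url(...) server-side (urllib.request,
# requests, etc.) and render the parsed ads into your page template.
sample = """<ads>
  <ad><title>Garden hoses, 20% off</title><url>http://example.com/1</url></ad>
  <ad><title>Drip irrigation kits</title><url>http://example.com/2</url></ad>
</ads>"""
print(parse_ads(sample)[0][0])  # Garden hoses, 20% off
```

Doing the fetch server-side also keeps the provider's raw feed and your keywords out of the client.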

-----


Are you sure the Garden Hose has free access? I can't find any public access to it in the docs; everything directs me to their "data partners", which charge for this information.

-----


I'm assuming you and the OP are referring to the Streaming API? More info here: https://dev.twitter.com/docs/api/streaming

-----


Love these scraper template generators. I wonder why you chose Java instead of something like PhantomJS to run the scraper.

-----


Um, the whole thing is language-agnostic, no?

-----


The generated APIs are language-agnostic, but the tool that creates them is Java-based.

-----


Well, the reference generator is in Java, but take a look at the GitHub repo; there is nothing preventing you from adding an additional generator in the /generators directory.

All you really need to do is output the template.

-----


I don't think there's any added value, and the differences between BTC and LTC are not substantial (IMO).

But my guess is that being the second major P2P currency will give LTC some traction, and some people who came late to the BTC party will jump in, raising its value for a while.

-----


It is just different enough to be separately viable at the same time; the blockchains are incompatible down to the hash level. Second, it rewards people with lots of commodity hardware rather than specialized hardware, which is where Bitcoin is going crazy right now, because a few players have got their hands on something (ASICs) that the rest of the network won't have for weeks or months to come.

ASIC miners for Bitcoin are not viable for Litecoin because Litecoin's scrypt hashing process is memory-bound, not CPU-bound like Bitcoin's SHA-256. My information comes not from any technical expertise but from interpreting what I'm told by people who have done more research than me.

Maybe someone could explain better?
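A rough illustration of the asymmetry, using Python's hashlib (the 80-byte header is a dummy stand-in; the scrypt parameters N=1024, r=1, p=1 are Litecoin's published ones):

```python
import hashlib

header = b"\x01" * 80  # stand-in for an 80-byte block header

# Bitcoin-style proof of work: double SHA-256. Compute-bound, and each
# hash needs essentially no working memory, which is why tiny ASIC
# cores can be replicated by the thousands.
btc_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Litecoin-style proof of work: scrypt with N=1024, r=1, p=1. Each hash
# must touch a scratchpad of 128 * r * N bytes (~128 KB here), which is
# what makes small-memory ASIC designs a poor fit and fast local memory
# (as on GPUs) an advantage.
ltc_hash = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

print(len(btc_hash), len(ltc_hash))  # 32 32
```

Both produce 32-byte digests; the difference is entirely in the memory each hash attempt must touch while it runs.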

-----


The central premise for Litecoin using scrypt was that it could not be mined on GPUs; this was quickly proved wrong and the statement retracted. There's no technical advancement in LTC; it's just a weaker Bitcoin clone.

-----


GPUs these days have, alongside their massive collections of ALUs, large tracts of fast local memory. That's why they are viable for Litecoin mining.

ASIC implementations for Bitcoin will be based on SHA-256 and built with very little internal memory that can be accessed without crossing another bus. Those implementations will not work with Litecoin or scrypt, which is memory-bound. GPUs have lots of memory.

It's no technical advancement over Bitcoin, I agree, but it is gaining acceptance (the prices have maintained parity with Bitcoin's growth), and it seems that your current investment in Litecoin mining will remain relevant for longer than old Bitcoin hardware, just because ASIC developers working on the Bitcoin problem all see Litecoin as just a weaker Bitcoin clone.

That's not an argument against Litecoin.

-----


But I guess that when the BTC bubble bursts, LTC will do the same...

-----


Bought it a few years ago... and I wouldn't recommend it. It was created in a pre-iPad/iPhone era and it's not comfortable to use from anything other than a PC.

-----


I haven't used Fever personally, but I do know the popular Reeder RSS client for Mac and iOS has Fever support.

-----


The problem is that, reading their website, it seems it does not support multiple users. If I am going to self-host, I'd rather do it in a way that can support my family and friends.

-----


(Currently) only the iPhone version of Reeder supports Fever. The iPad version does not. In version 2.0 the dev team might add support for Fever.

-----


That's true, but you'll have to recommend an alternative instead, since there is only a finite number of RSS services; I say that as an RSS power user.

Beggars can't be choosers. :)

-----


I followed the link expecting some kind of challenge: what can you actually ship for 300 USD? That idea triggered a lot of thinking. What would I do with that money and just a few days?

Then I saw that it was just a poorly written (3 lines) cheap job post.

-----


I don't like this. Beyond the discussion of whether 3rd-party cookies are good or bad, these measures always turn out for the worst.

Not long ago, IE set DoNotTrack by default. What happened? Every single company that respected the user's DoNotTrack decision stopped doing so, since it was the browser, not the user, that made the choice.

Long story short: All the effort done with DoNotTrack was wasted.

That said, cookie tracking is far from perfect. It might be great for ad companies, might be useful for retailers and might be creepy for some users, but IMO it is the safest way there is to date to keep the equilibrium. There are ways to protect yourself from cookie tracking and there's plenty of information.

-----


There's a critical difference here, though. DNT was asking not to be tracked; blocking the cookies is forcing it.

-----


Kind of old, but in its time PlentyOfFish.com was a 1-man startup. Source: http://en.wikipedia.org/wiki/PlentyofFish

I would love to see a list of 1-man startups/companies and their experiences, but chances are that these guys are way too busy to blog about it.

-----


So cool, a few questions:

1. The price info, I assume, is for the US only, right?

2. Are you only analysing new products, or also used ones?

I created a similar (but very, very modest) proof of concept to track MercadoLibre's prices (http://numok.com/products/view/samsung-t24a550/9); however, it seems to be unusable without a human verifying each listing, as you note on your blog:

> This isn’t the highest price that we’ve recorded for a product though. Turns out this Samsung TV was priced at $1,000,000,000,000.00 ($1 trillion) in early November last year. A dozen sales of this would have gone a long way towards offsetting the American national debt!

3. Are you doing this validation in some way, or should unreal prices be expected when using your API?

As stated before, great pricing! Although I'm not sure how the limit of products works for the two initial account types (up to 10,000).

Overall, I'm glad to have this API, thank you!

-----


1. Right now, we're focusing on the US. But we've made room for expanding internationally (the "currency" and "geo" fields are in place with this in mind).

2. We're analyzing used and refurbished products as well. Each offer is tagged with a "condition" field that conveys this.

3. The question of whether a price of a product is right or wrong is, we realized with time, subjective. Yes, $1,000,000,000,000.00 is very unlikely, but where does one draw the line? Hence, we don't mark something as bad data and remove it from the database at the data layer. But we do handle this problem at the search layer: we internally rank products based on factors such as their (estimated) genuineness, popularity and so on. For the user, this means that when you query the API, only the most relevant products will be returned. The ranking system is constantly learning, so the vision is that it'll get better with time and data.
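As a toy illustration of why the line is subjective (this is a deliberately naive sketch, not what the API actually does): flag an offer if its price is more than some factor k away from the median for that product.

```python
from statistics import median

def flag_outliers(prices, k=10):
    """Flag prices more than k times above (or below 1/k of) the median.

    Deliberately naive: any fixed k is arbitrary, which is the point --
    'bad price' has no objective cutoff, so ranking beats hard deletion.
    """
    m = median(prices)
    return [p for p in prices if p > m * k or p < m / k]

# The trillion-dollar Samsung TV from the blog post stands out easily...
offers = [899.99, 949.00, 920.50, 1_000_000_000_000.00]
print(flag_outliers(offers))  # [1000000000000.0]
# ...but a listing at 3x the median might be a scam or a bundle. Which?
```

Borderline cases like that are why ranking at the search layer, rather than deleting at the data layer, is the defensible choice.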

Thanks for your words about the pricing. Each API query returns up to 10 products; the free plan provides 1,000 API queries a day, so you could retrieve up to 10,000 products each day. Hope that clarifies. Glad you find the API useful - I'd love to know more about how you plan to use it!
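Reading those numbers back as arithmetic (a sketch using only the figures quoted above):

```python
# Figures from the comment: each API query returns up to 10 products,
# and the free plan allows 1,000 queries per day.
PRODUCTS_PER_QUERY = 10
QUERIES_PER_DAY = 1_000

# The most products the free plan can retrieve in a single day:
daily_cap = PRODUCTS_PER_QUERY * QUERIES_PER_DAY
print(daily_cap)  # 10000

def queries_needed(n_products):
    # Queries required to refresh a catalogue of n tracked products
    # (ceiling division, since a partial page still costs one query).
    return -(-n_products // PRODUCTS_PER_QUERY)

print(queries_needed(2_500))  # 250
```

So the "up to 10,000" product limit falls directly out of the query quota rather than being a separate cap.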

-----
