
But who are we to say where he should apply his abilities? He may be interested in something new and that is where his next best contributions lie.

-----


I know it's been said a million times already, but no way would I use this service after they shut down Reader.

Maybe they're just fishing to see how much market share they can take away from Evernote? If it's enough, they'll keep it and improve it. But if it doesn't capture enough market share, I expect to see it culled in a few years as well.

-----


And if you use Firefox, give pentadactyl or vimperator a try.

-----


So do they have a policy of no instant messages or emails? I'd be pretty upset if I was forced to come into work only to continue to communicate with my coworkers through IM and email.

-----


Just yesterday I was able to write a one-liner on the command line to analyze my Skype usage. I wanted to see whether it was making financial sense to keep Skype alongside my prepaid wireless plan.

You can download your last six months of Skype activity from skype.com; the records are in this form:

    Date;Date;Item;Destination;Type;Rate;Duration;Amount;Currency
    "July 31, 2012 21:16";"2012-07-31T21:16:01+00:00";"+11234567890";"USA";"Call";0.000;00:00:10;0.000;USD
    "July 31, 2012 21:15";"2012-07-31T21:15:38+00:00";"+11234567890";"USA";"Call";0.000;01:17:02;0.000;USD

After 15 minutes or so, I came up with the following one-liner:

    cut -f7 -d";" call_history* | grep -v "Duration" | awk -F: '{ s += $1*60 + $2; if ($3 != 0) s += 1 } END { print s " minutes" }'

I could have done the same thing in Perl or Python in 5 minutes, but it was interesting to "program" purely by hooking programs together to achieve the same thing.

After analyzing my Skype logs, I found I had used 1800 minutes. That would have cost me $180 with prepaid minutes, but only $30 with Skype.
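Since the comment mentions a Perl or Python version would have been quicker, here is a sketch of what that might look like in Python (the sample data is illustrative, built from the export format shown above; the rounding matches the awk logic: any leftover seconds count as a full minute):

```python
import csv
import io

# A small sample in the semicolon-delimited Skype export format.
SAMPLE = """\
Date;Date;Item;Destination;Type;Rate;Duration;Amount;Currency
"July 31, 2012 21:16";"2012-07-31T21:16:01+00:00";"+11234567890";"USA";"Call";0.000;00:00:10;0.000;USD
"July 31, 2012 21:15";"2012-07-31T21:15:38+00:00";"+11234567890";"USA";"Call";0.000;01:17:02;0.000;USD
"""

def total_minutes(text):
    """Sum the Duration column (HH:MM:SS), rounding partial minutes up."""
    minutes = 0
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    for row in reader:
        h, m, s = (int(x) for x in row["Duration"].split(":"))
        minutes += h * 60 + m + (1 if s else 0)
    return minutes

print(total_minutes(SAMPLE), "minutes")
```

For a real run you would read the downloaded call_history files instead of the inline sample.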

-----


Another way would be all awk.

    awk -F\; '
        $7 != "Duration" {
            split($7, t, ":")
            s += t[1] * 60 + t[2] + (t[3] != 0)
        }
        END {print s + 0}
    '
Note the s + 0 in END: it forces numeric output even when no records matched and s is still the empty string.

-----


The sed and dc combo is perhaps not as readable, but there was a time before the dawn of the One True Awk. :-)

    sed '1s/.*/0/; s/;[^;]*$//; s///; s/.*;//
        s/00$//; s/:..$/+1++/; s/:$/++/; s/:/ 60*/; $s/$/p/' |
    dc

-----


I was seriously coming here to say the exact same thing. Right now I'm a web developer, but I've been dreaming about starting a medium-sized aquaponics farm. I'm currently building a system that will use my master bath as a fish tank to learn the ropes.

-----


Ha! That's awesome! It's seriously the future of food security for urban areas. I'm dreaming up low-rise and high-rise apartment developments where there are dedicated floors of aquaponics, securing food for the people in the building. Rooftop gardens will have chickens and goats. Bet we could develop an awesome app systemising the process too. Good times!

-----


Millions of acres are under cultivation, to feed America. It seems impractical to get that from some rooftops, even under intense cultivation.

-----


I'm not American, so am more influenced by what I've seen in Australia and my travels around South-East Asia. There are many urban areas that are not surrounded by arable land, or the arable land they have is dedicated to growing cash crops for export. Globally there's a massive movement towards locally-grown, organically-sourced, low-mileage food, so aquaponics is one solution that taps that.

-----


It's optimistic to call the effort 'massive'. Current agricultural practices are massive. The movement toward local produce is 'boutique', maybe. In fact, the whole organic food deal is a speck on the agribusiness landscape.

-----


A hybridized food system -- say, with local herbs, tomatoes, lettuces, etc. -- doesn't seem all that impractical, however?

-----


Clearly, it depends upon the height of the building. A two-storey building with 4 families = 20 people -- that is a stretch. A 20-storey with 40 families -- the roof is no bigger.
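The scaling argument can be made concrete with a back-of-envelope sketch. The roof area here is an assumed number, and 5 people per family follows the "4 families = 20 people" figure above; only the ratio matters:

```python
# Assumed fixed roof footprint; residents scale with the number of floors,
# so roof area per resident shrinks as the building gets taller.
ROOF_M2 = 450          # assumed footprint, identical for both buildings
PEOPLE_PER_FAMILY = 5  # matches the 4 families = 20 people figure

for storeys, families in [(2, 4), (20, 40)]:
    residents = families * PEOPLE_PER_FAMILY
    print(f"{storeys:>2}-storey: {ROOF_M2 / residents:.2f} m^2 of roof per resident")
```

With these numbers the 20-storey building has a tenth of the roof area per resident that the two-storey one does.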

-----


Agree, this is a good point... you would need dedicated spaces or whatnot, if not re-purposing unused ones. Depending on the economics of square footage and energy needs, it may or may not be good math.

-----


You must not have much experience being around teenage boys. An improved test for pancreatic cancer is not something that normally comes out of that demographic.

-----


Unfortunately, pentadactyl doesn't seem to work with Firefox 15, so I guess I'll have to stay at 14. I tried both 1.0rc1 and the latest nightly, and neither works.

-----


I just switched back to vimperator for the time being. It's not so terrible.

-----


Why would scraping their site be illegal?

-----


Scraping itself isn't always illegal. But scraping a site and then republishing the content elsewhere often is, unless you have permission.

-----


I figured displaying their content in full is illegal.

-----


Ah, I hadn't clicked any of the links. I thought you had just linked back to their site.

By the way, I like your site. It makes it easy to cut through the cruft that always appears in reddit threads. Do you just grab all responses by the poster and the parent comment?

-----


If they have a full content RSS feed, then this use should be okay. I hope it survives their scrutiny, since this is vastly more readable than the native view.

-----


It's an interesting debate; surely the content is owned by the writers?

-----


A couple of months ago, reddit started refusing requests from my web scraper. I figured out they had started checking the user agent and refusing connections that didn't look like they came from a user's browser. Unless I missed an announcement somewhere, they don't seem overly friendly about allowing web scrapers.

-----


They just don't want to be abused.

"We're happy to have API clients, crawlers, scrapers, and Greasemonkey scripts, but they have to obey some rules:"

https://github.com/reddit/reddit/wiki/API
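One of those rules is identifying your client with a unique, descriptive User-Agent rather than a default library string. A minimal sketch (the URL path and UA string here are illustrative, not a registered client):

```python
import urllib.request

# reddit's API rules ask clients to send a unique, descriptive User-Agent;
# default library UA strings are commonly blocked as anonymous scrapers.
req = urllib.request.Request(
    "http://www.reddit.com/r/redditdev/.json",
    headers={"User-Agent": "my-scraper/0.1 (contact: example@example.com)"},
)
print(req.get_header("User-agent"))
# The actual fetch (urllib.request.urlopen(req)) is omitted here.
```

The same header can be set on whatever HTTP client the scraper already uses; the point is just not to look like an anonymous bot.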

-----


They are fine with it as long as you abide by their terms. They have a subreddit dedicated to reddit development and the reddit API, which has discussion of scraping: http://www.reddit.com/r/redditdev

-----
