premchai21's comments | Hacker News

You might be referring to Greenling (http://www.greenling.com/), though I don't recall whether they're the only one.

-----


Among other things, MyModule.camelize doesn't provide an extension point for polymorphism. Consider a serialization library in which different classes should be able to have separate freeze/thaw behavior. It's possible to do this in current-day Ruby without easy collisions, but it tends to involve cumbersome manual prefixing or emulating method dispatch yourself.

Methods being namespaced in packages separately from the class hierarchy is something CLOS has that I miss in almost all the more Smalltalk-y languages.

-----


MyModule.camelize doesn't support polymorphism (you could even call it procedural programming in an OOP shell), but something like Camelizable.new(my_string).camelize does. I'm still convinced that what Ruby needs is more object composition (and maybe a way to override default literal construction), and that refinements are a cannon for shooting a mosquito.

-----


Camelizable.new(my_string).camelize does not support polymorphism, because it does not depend on the type of my_string.

Which means that if I want my AnnotatedString to be camelizable, I have to monkey-patch Camelizable#initialize.

-----


No, it means that the implementation of Camelizable#camelize needs to be polymorphic w.r.t. my_string. Since Ruby is duck-typed, presumably the implementation of "camelize" would first check to see if the receiver responded to "gsub", then do the necessary substitutions. (Actually, a better implementation would be to first check if my_string responds to "camelize" and call that directly if it does.) In the end, this is a classic decorator pattern.
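
A rough sketch of such a decorator (the underscore-folding regexp is just for illustration, not any particular library's behavior):

  class Camelizable
    def initialize(obj)
      @obj = obj
    end

    def camelize
      # Best case: the wrapped object knows how to camelize itself.
      return @obj.camelize if @obj.respond_to?(:camelize)
      # Otherwise duck-type on gsub and do the substitutions ourselves.
      if @obj.respond_to?(:gsub)
        @obj.gsub(/_([a-z])/) { $1.upcase }
      else
        raise TypeError, "can't camelize #{@obj.class}"
      end
    end
  end

  Camelizable.new("foo_bar").camelize  # => "fooBar"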

-----


If you go the way of a decorator pattern (say I write my `IntegerCamelDecorator` because your `Camelizable.new` does not support numbers), then what is the point of `Camelizable` to start with? You can just have a mirror hierarchy of `CamelAdapters`.

Of course, you can implement `Camelizable.new` so that it performs a dynamic dispatch itself, by looking up in a `CamelizableRegistry`. You can even have this built up magically with reflection.

You can, of course, do everything, but a method call in Ruby still only dispatches on self; if that is fixed, the method cannot be polymorphic. If you think otherwise, we can agree to disagree.

-----


You're focusing on the #new call. The actual #camelize call is still dispatching based on the Camelizable object which is parameterized with my_string. Ok, so it's not true call-site polymorphism...fine. I'd argue that's an unimportant implementation detail, but if you prefer (and what I frequently do in my own code) you could have something more like:

    Camelizable(my_string).camelize
where now Camelizable does the dynamic class lookup to choose the correct decorator for my_string. This turns #camelize into a truly call-site polymorphic call and, at least in my opinion, is far more readable/reasonable than either monkeypatching or refinements.
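
Fleshed out, a hypothetical version of that lookup might look like this (the decorator classes and registry are invented for illustration):

  StringCamelDecorator = Struct.new(:obj) do
    def camelize
      obj.gsub(/_([a-z])/) { $1.upcase }
    end
  end

  IntegerCamelDecorator = Struct.new(:obj) do
    def camelize
      obj.to_s  # nothing to fold in a number
    end
  end

  CAMEL_DECORATORS = { String  => StringCamelDecorator,
                       Integer => IntegerCamelDecorator }

  def Camelizable(obj)
    # Search the ancestor chain so a String subclass such as
    # AnnotatedString still finds StringCamelDecorator.
    key = obj.class.ancestors.find { |a| CAMEL_DECORATORS.key?(a) }
    raise TypeError, "no camelize decorator for #{obj.class}" unless key
    CAMEL_DECORATORS[key].new(obj)
  end

  Camelizable("foo_bar").camelize  # => "fooBar"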

-----


fork() only leaves the one calling thread running in the child, and at that point the fd tables are no longer shared. So detecting and closing unwanted descriptors in the child after fork, as a mitigation for uncontrollable non-CLOEXEC opens elsewhere in the process, is not racy by itself (though this doesn't preclude it being a bad idea for other reasons).
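
For instance, a Linux-specific sketch of the child-side sweep (error handling and portability hedges omitted):

  #include <dirent.h>
  #include <stdlib.h>
  #include <unistd.h>

  /* Call in the child immediately after fork(): close everything above
     stderr, whether or not it was opened with O_CLOEXEC. Only this one
     thread exists in the child and its fd table is now private, so no
     other code can race us by opening new descriptors. */
  static void close_inherited_fds(void) {
      DIR *d = opendir("/proc/self/fd");
      if (!d)
          return;
      struct dirent *e;
      while ((e = readdir(d)) != NULL) {
          int fd = atoi(e->d_name);
          if (fd > 2 && fd != dirfd(d))
              close(fd);
      }
      closedir(d);
  }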

-----


true, sorry.

-----


Darcs's send and apply work on this principle, of course, though I haven't seen it applied to XMPP specifically. As I recall, the manual even has an example of how to add an easy hook in Mutt to apply a patch from email.

On the git side, git diff and git apply are available for single patches, and git format-patch and git am are optimized for the mail case so that multiple commits can be sent at once. darcs apply has integrated PGP signature checking, though, and at a glance I don't see anything similar in the git versions. git send-pack and git receive-pack also have an underlying role not strongly coupled to the overlay transport, but they seem to be designed for bidirectional communication, so that the repositories can negotiate over which objects they already have.
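
For concreteness, the format-patch/am mail workflow looks roughly like this (branch names and filenames invented):

  # sender: one mail-formatted patch file per commit since origin/master
  git format-patch origin/master
  # ...send the resulting 0001-*.patch files, e.g. with git send-email...

  # recipient: apply the whole series from a mailbox,
  # preserving authorship and commit messages
  git am < series.mbox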

-----


Git 1.7.9 added the ability to sign commits. Previously you could only sign tags:

  * "git commit" learned "-S" to GPG-sign the commit; this can be shown
   with the "--show-signature" option to "git log".
man git:

  All objects are named by the SHA1 hash of their contents, normally 
  written as a string of 40 hex digits. Such names are globally unique. 
  The entire history leading up to a commit can be vouched for by signing
  just that commit."
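
So, concretely:

  git commit -S -m 'some change'   # sign with your default GPG key
  git log --show-signature -1      # verify and display the signature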

-----


What would be really nifty would be the ability to add multiple signatures to commits.

Take a commit with one or more signatures and add a new one: create a new commit that contains the previous signatures (which should still be valid if you don't change anything else) plus your own.

I looked into this very briefly and think it should be possible, but didn't go any further than that.

This would, I think, be neat for building things like code review systems that require a certain number of signatures from a larger set of potential signers before automatic deployment.

-----


There are signed commits, yes. Is there also "apply these, but only if they have a valid signature from one of these people"? That's what I didn't see.
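
Not that I've seen built in; the closest I can imagine is an update hook on the receiving side (a sketch, assuming a git new enough to have verify-commit, and ignoring ref creation/deletion edge cases):

  #!/bin/sh
  # update hook, invoked as: update <refname> <oldrev> <newrev>
  refname=$1 oldrev=$2 newrev=$3
  for c in $(git rev-list "$oldrev..$newrev"); do
    # verify-commit exits nonzero unless the commit carries a valid
    # signature from a key in this repository's keyring
    git verify-commit "$c" >/dev/null 2>&1 || {
      echo "rejecting $refname: commit $c lacks a valid signature" >&2
      exit 1
    }
  done
  exit 0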

-----


If you change it so that what feels "natural" to you is not what feels "natural" to everyone else, then you are in for a world of hurt when you find that all the resources you were relying on are no longer presented in a way that you're compatible enough to access.

-----


As it happens, I'm currently in a slow period and looking around for side projects that might lead to money. The other posters have some reasonable general ideas, but if you're willing to provide more detail, I can see whether it's the sort of thing that I'd be able to either help with or find someone else for—right now I can't determine enough from your post to say anything useful that hasn't already been said. You can send me email at my HN username at mailforce.net, if you wish. :-)

-----


I have a large chunk of friends and acquaintances who now conduct their ongoing visible conversations with each other almost entirely on Twitter, many of whom have started becoming impossible to reach by other means.

They also have visible half-conversations with a similar number of other people whose tweets are “protected” so that only “confirmed followers” can read them. I don't see OStatus doing anything with the latter, and I don't think those people are going to move to public streams, and if they don't move, the other people interspersed with them won't move because it'll become impossible to talk to them. This is more or less the same reason I still reluctantly keep a LiveJournal: the open-Web facilities for “but only show it to these people” are severely lacking (and I haven't found a good way of doing anything about this yet).

There's also the issue of social networks including things that are de facto currency-like: “number of followers” on Twitter is an obvious one. In a distributed network these usually can be faked, or at least are that way on the UI side since it's hard to generate a UI for that that doesn't drive the user's security-related cognitive load through the roof. (Or reveal more information about subscriptions than people prefer, but that's a shakier reason since some existing networks already reveal that graph.)

Is there any push for OStatus or other distributed social network approaches to handle these use cases? I haven't been able to find any, and OStatus seems to think the restricted stream case is explicitly out of scope.

-----


I'm not commenting on whether OStatus will ever have these features. I'd like to just point out that what you're describing is absolutely disturbing. I'm talking about the part where you said that these people cannot be reached the other way and conduct all of their conversations on Twitter.

-----


Well, let me clarify just in case: the “impossible” is an exaggeration, since they will eventually respond to things like email, but it's not enough to keep up. Many of the socially-important multicast messages only occur on Twitter. I've been meaning to subscribe to them with a local aggregator, since for those whose tweets are publicly visible that should be sufficient, but while my existing RSS links still work, I can't find any way to acquire new ones, so right now I'm reduced to manual polling.

-----


Have you tried: http://api.twitter.com/1/statuses/user_timeline.rss?screen_n...

from http://thenextweb.com/twitter/2011/06/23/how-to-find-the-rss...

-----


I had not! That appears to work for now, though I'm not convinced it will continue working if Twitter continues going the way they are. For some reason that didn't appear in my search; thank you.

-----


Have there been discussions about handling private data and private content? Yes: there have been efforts to make that possible in OStatus, but since OStatus itself is just made up of other specifications, it currently awaits those other specifications being extended in ways that support this. I'm not sure what the current status really is.

Personally, though, I would prefer to have the main use case, the one with just public data, work first, learn from that, and get it rolling before moving on to the more complex stuff.

-----


Sure, but you have to be careful of the gradual lockdown effect as more people get involved. The handling of non-public information is a cross-cutting concern. If everyone builds their software and security models around the assumption that all posts are visible to everyone (because it's the common case, and they decided, just as you're saying, that supporting the common case was the most important thing), then going back later after everyone's gotten attached to the software and trying to add private multicast without any leaks can be a nightmare. No one will be able to use it because their friends won't be able to use it unless every piece of software in between makes it work.

I'm tempted to compare to how deploying new transport protocols over IP is nearly impossible for consumer clients now, because everyone's built NATs that assume TCP and UDP, because those were the common cases and therefore the important ones and now anything else is instantly hosed. It's a bit of a bad example, though, because in the case of transports there are other reasons as well.

-----


> This is more or less the same reason I still reluctantly keep a LiveJournal: the open-Web facilities for “but only show it to these people” are severely lacking (and I haven't found a good way of doing anything about this yet).

Google Plus seems to have achieved this fairly well. I haven't had the time to take a look at their API yet, but I can't imagine it's any less than what LJ provided.

-----


Does that really qualify as open-Web facilities, though? Is it “show it to these people”, or is it “show it to these Google Plus users”? The latter is not appreciably better than the LiveJournal case, for me, and in fact this provides a demonstration of the lock-in effect.

Here's another one: Dreamwidth runs an LJ-derived codebase, arguably an improved one (they had considerably better separation of “subscribe” and “authorize” last I checked, rather than a “friends list” that conflates the two), and some of the people I contact on LJ have moved there. But they all have continuous crossposts back to the original LiveJournal, and if I moved there I think either no one would read anything I wrote, or else the comment streams would be so disjointed that I would be effectively a strange-looking LJ user anyway.

That last is also a concrete example of why nonexclusivity is not a complete solution. The resource that's being fought over is not where one can read but where a bunch of other people do read. If everyone views your content at Phuubaar's House of Crossposts, then if Phuubaar cuts you off, you are still hosed in the general case even if you provide the same stream somewhere else, because those users are not going to know about it or are going to find it too inconvenient to subscribe.

And the tooling around Atom and RSS aggregation all seems to be built around the idea that feeds are almost always public. I haven't had any success with the idea of creating a private Atom feed and expecting any of my friends to be able to read it. Either it'll require authentication, at which point the software usually won't be able to access it, or I can try to use a capability-URL style, at which point one of them will punch it into their favorite everyone-shares-everything social aggregator (Google Reader?) and then my (illusion of) confidentiality is gone.

This is terrible, and I don't know how to fix it.

-----


I actually use macros along the lines of:

  #define LOAD(TYP, ptr) (*((TYP *)memcpy((TYP[1]){ 0 }, (ptr), sizeof(TYP))))
  #define REINTERPRET(AS_TYP, FROM_TYP, val) LOAD(AS_TYP, (FROM_TYP[1]){ (val) })
In practice (that I've found, with GCC), the memcpy and single-use temporaries get optimized away entirely. In strict C99, writing one member of a union and then reading a different member of the same union is undefined in the general case, last I checked. http://cellperformance.beyond3d.com/articles/2006/06/underst... seems to agree, but suggests that every major compiler recognizes it as a de facto idiom and supports it anyway.
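
A complete toy example, repeating the macros so it stands alone (assumes IEEE-754 floats and 32-bit unsigned int, as on typical GCC targets):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define LOAD(TYP, ptr) (*((TYP *)memcpy((TYP[1]){ 0 }, (ptr), sizeof(TYP))))
  #define REINTERPRET(AS_TYP, FROM_TYP, val) LOAD(AS_TYP, (FROM_TYP[1]){ (val) })

  int main(void) {
      /* View the bytes of a float as a uint32_t, going through memcpy
         rather than a union or a pointer cast. */
      uint32_t bits = REINTERPRET(uint32_t, float, 1.0f);
      printf("%08x\n", (unsigned)bits);  /* 3f800000 given IEEE-754 */
      return 0;
  }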

-----


Yep, type punning through a union is nonstandard and unportable, but if I'm not mistaken, it is a documented feature in GCC.

-----


Type punning through a union is explicitly defined to work in C99 and C11 (footnote 95 in C11):

> If the member used to read the contents of a union object is not the same as the member last used to store a value in the object, the appropriate part of the object representation of the value is reinterpreted as an object representation in the new type as described in 6.2.6 (a process sometimes called ‘‘type punning’’).

Please don't help spread the myth that compiler writers can break this idiom. That said, I use memcpy in my own code.

-----


All right, I see it in C99 as well now (§6.5.2.3 footnote 82, if I'm not mistaken). Thanks for the correction.

-----


Oh, well I stand corrected.

-----


But, last I checked, you could PRAGMA foreign_keys=off temporarily, which would also inhibit the automatic handling of foreign key constraints in other tables when the target table is renamed or dropped. Then you can create a new table, populate it, drop the old one, rename the new one to the old name, and turn foreign_keys back on, and the constraints will now target the new table.
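
Roughly this dance, with a made-up widgets table (note the pragma has to be set outside any transaction, where it is otherwise a no-op):

  PRAGMA foreign_keys = off;

  BEGIN;
  CREATE TABLE widgets_new (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
  );
  INSERT INTO widgets_new (id, name) SELECT id, name FROM widgets;
  DROP TABLE widgets;
  ALTER TABLE widgets_new RENAME TO widgets;
  COMMIT;

  PRAGMA foreign_keys = on;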

-----


Does that actually work for dropping and renaming tables? I thought I tried that.

-----


In any event you can get the foreign keys from sqlite_master, delete them during the reorganisation and then put them back afterwards.

-----


It does in 3.7.13 (and presumably older versions, but I don't have a solid record of this immediately available).

-----


Resuming the fetch of multiple unfetched fragments of a resource is already possible with HTTP range requests. Checking the entire file can be done if the server provides a Content-MD5 header (though I'm not sure actual client use of this is widespread), and checking individual parts is rarely useful to the processing application by itself, and would be highly application-dependent even if it were. The big win of BitTorrent is the distribution of available channel capacity over the swarm of downloaders; if you're not going to use that part, HTTP does quite well by itself.
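
e.g., with curl (URL and byte offsets invented):

  # resume a partial download from wherever the local file ends
  curl -C - -O http://example.com/big.iso

  # or fetch one specific missing fragment
  curl -H 'Range: bytes=1048576-2097151' -o big.iso.part2 http://example.com/big.iso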

-----
