Hacker News | thangalin's comments

http://homepage.cs.uiowa.edu/~jones/cards/history.html

"The use of punched cards in the Jacquard loom influenced Charles Babbage, who decided to use punched cards to control the sequence of computations in his proposed analytical engine. Unlike Hollerith's cards of 50 years later, which were handled in decks like playing cards, Babbage's punched cards were to be strung together like Jaquard's. Despite this and the fact that he never actually built an analytical engine, Babbage's proposed use of cards played a crucial role in later years, providing a precident that prevented Hollerith's company (and its successors) from claiming patent rights on the very idea of storing data on punched cards."

http://www.adbranch.com/how-ibm-helped-automate-the-nazi-dea...

http://www.scrapbookpages.com/AuschwitzScrapbook/History/Art...

"Auschwitz historians were originally convinced that there were no machines at Auschwitz, that all the prisoner documents were processed at a remote location, primarily because they could find no trace of the equipment in the area. They even speculated that the stamped forms from Auschwitz III were actually punched at the massive Hollerith service at Mauthausen concentration camp. Indeed, even the Farben Hollerith documents had been identified some time ago at Auschwitz, but were not understood as IBM printouts. That is, not until the Hollerith Büro itself was discovered. Archivists only found the Büro because it was listed in the I.G. Werk Auschwitz phone book on page 50. The phone extension was 4496. "I was looking for something else," recalls Auschwitz' Setkiewicz, "and there it was." Once the printouts were reexamined in the light of IBM punch card revelations, the connection became clear."

http://www.columbia.edu/cu/computinghistory/census-tabulator...

"And in this day and age of high resolution bitmapped displays with millions of colors, driven by the supercomputer-crushing performance of modern graphics hardware, your xterm window emulates an 80 column VT100 in order to provide some semblance of compatibility with 80 column Hollerith punched cards that date to the 1920s and were common on the IBM 1604."

http://www.quadibloc.com/comp/cardint.htm

Windows command prompts and xterm windows have a default width whose lineage traces back to 1801 (or earlier if the loom's history is considered).

It is painful to learn how our industry (software and computational hardware) can be, and has been, abused for such unfathomably despicable, misguided purposes.

-----


Valproate, valproic acid, and sodium butyrate inhibit histone deacetylases, which might help rejuvenate brain plasticity. See also:

* http://onlinelibrary.wiley.com/doi/10.1111/j.1460-9568.2010....

* http://www.newscientist.com/article/dn24831-learning-drugs-r...

* http://en.wikipedia.org/wiki/Malleability_of_intelligence

-----


Valproate is used as an anti-epileptic and as a mood stabiliser. There are some risks of PCOS for women, but those risks have never been explained properly to me.

-----


The connection that immediately leaps to mind is the role of dopamine in inhibiting prolactin—if you have a slight hormone production imbalance on the thyroid/pituitary axis, decreasing dopamine production (and thus increasing prolactin) could exacerbate it.

-----


Valproate causes massive weight gain. It likely exacerbates any conditions (incl. PCOS) associated with obesity.

-----


It ought not ring true for anyone.

“Economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”

~ Martin Gilens and Benjamin Page, Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens; Perspectives on Politics, 2014

-----


A while back I documented the steps necessary to encrypt Gmail using Thunderbird on Windows. Does anyone have a working list of instructions for encrypting e-mail that are significantly simpler?

http://davidjarvis.ca/encryption/

In other words, how can the number of steps to achieve desktop e-mail encryption be reduced to the point where the majority of users can accomplish this task autonomously--and what are those steps?

Happy New Year!

-----


Pull requests and suggestions for improvement are welcome.

https://bitbucket.org/djarvis/free-food/src/master/

Calculations for the renewable energy input are complex. The PVWatts Calculator[0] helped determine that a vast array (10,000 m^2) of solar panels at 46% efficiency could produce 5,182,905 kWh per year.[1] Whether that would be sufficient power to grow the plants (using lights casting 303 lumens per watt at 350 mA[2]) is unknown at this time. A back-of-envelope sketch of the arithmetic follows the footnotes.

[0]: http://pvwatts.nrel.gov/

[1]: https://bitbucket.org/djarvis/free-food/src/master/solar.md

[2]: http://www.cree.com/News-and-Events/Cree-News/Press-Releases...
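
For what it's worth, here is a rough sanity check of those figures (the arithmetic and the 1 kW/m^2 standard-irradiance assumption are mine, not PVWatts'):

    // Back-of-envelope check of the solar figures above.
    var AREA_M2 = 10000;       // array area from the comment above
    var EFFICIENCY = 0.46;     // assumed panel efficiency
    var STC_KW_PER_M2 = 1.0;   // standard test-condition irradiance

    // Nameplate DC capacity: 10,000 * 0.46 * 1.0 = 4,600 kW.
    var capacityKw = AREA_M2 * EFFICIENCY * STC_KW_PER_M2;

    // The quoted 5,182,905 kWh/year implies ~1,127 kWh per kW per year,
    // i.e. about 1,127 equivalent full-sun hours -- a plausible yield.
    var annualKwh = 5182905;
    var yieldKwhPerKw = annualKwh / capacityKw;

    console.log(capacityKw + " kW nameplate, " +
                yieldKwhPerKw.toFixed(0) + " kWh/kW/year implied");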

-----


"if the cell phone had been password protected or otherwise ‘locked’ to users other than the appellant, it would not have been appropriate to take steps to open the cell phone and examine its contents without first obtaining a search warrant."

http://www.michaelgeist.ca/2013/02/fearon-decision/

http://www.canlii.org/en/on/onca/doc/2013/2013onca106/2013on...

http://www.cbc.ca/news/politics/cellphone-searches-upon-arre...

-----


By that logic, the police should be able to search your house or car if they're not locked.

The Canadian Supreme Court isn't known for its intelligent decisions.

http://www.ctvnews.ca/w5/justice-system-scrutinized-woman-hi...

-----


The judges in the case in this article say otherwise:

"The majority also found that whether someone has protected their phone with a password doesn't carry much weight in assessing that person's expectation of privacy.

"An individual's decision not to password protect his or her cellphone does not indicate any sort of abandonment of the significant privacy interests one generally will have in the contents of the phone," Justice Thomas Cromwell wrote."

-----


I concur. There are several computer players on KGS that are rated between 1 kyu and 5 dan.

Put another way: at my peak, I played 5 games against Kyoto's strongest amateur player (~7 dan), holding black. I won 1 and was handily beaten (no contest) in the other 4. After leaving Go for a few years, I've come back only to be beaten by the 1 kyu AI.

-----


The high kyu/dan level programs are worth playing against. The ones under 20 kyu aren't - they don't play the sort of game you can learn from, especially when you're getting started and don't know what things are supposed to look like.

-----


There's a deeper issue here that needs to be addressed. Consider the following review:

https://bitbucket.org/djarvis/world-politics/wiki/Editor

There are well over 30 WYSIWYG editors that (fail to) normalize the differences between browsers to provide a consistent text editing experience. Much of this duplicated effort would be better directed at reviewing, publishing, and updating a standard draft for editable, browser-based content.

https://blog.whatwg.org/the-road-to-html-5-contenteditable

Even though Squire isn't meant as a drop-in replacement for web page editing, it still suffers from the same endemic issue that plagues all editors: browser inconsistency.

-----


One of Quill’s goals is to be a browser-consistent editor. It already produces identical HTML across browsers, and everything runs through an automated test suite covering over a dozen platforms. I wouldn’t say it has succeeded in this regard yet (it’s still pre-1.0 software) but it’s at least a goal. Perhaps Quill will be the first to achieve this, but I do think many other new editors are hot on its heels. The problematic ones are very old editors and editors that are too lightweight (it’s not possible to solve all of contenteditable’s issues in ten lines of code). I’m not sure where Squire falls, but it’s always interesting to see different approaches and use cases.

I hate to nitpick, but since the linked editor list has ~10 facts per editor, and at least two of them are incorrect about Quill (Quill does not depend on jQuery, and its license is BSD; the wrapping complaint is also unclear), I have to say I’m not sure it’s a very reliable list.

Disclaimer: I am the author and maintainer of Quill.

-----


Fair enough. It takes a lot of time to voluntarily find and download each editor, see how well (and easily) it integrates with the sample page, and perform the basic research. I readily admit that mistakes may have been made.

FWIW, here are two pictures of the same page:

- http://i.imgur.com/CThwDev.png (Quill)

- http://i.imgur.com/FXiVmBd.png (Aloha)

In addition to injecting superfluous markup, Quill also changes the font.

Incidentally, the LICENSE file does not explicitly state that it is BSD-based. I admit that I did not read (very far) below the fold on either the Quill homepage or its Github page, but went directly to the LICENSE file. (Again, mostly because I don't want to waste a lot of time hunting for the same information across 30+ projects.)

https://github.com/quilljs/quill/blob/develop/LICENSE

As far as I have discovered, no other such comprehensive comparison exists. I trust readers will verify the information for themselves and hope that it may prove useful.

-----


The sample page (http://djarvis.bitbucket.org/xml/support.xml) is in XML, which is an input format that Quill and many other WYSIWYG editors do not support (nor claim to). Right now Chrome will not even render this page.

-----


Yet another unfortunate difference in browser implementations. Firefox can use relative paths in XSL includes, whereas Chrome cannot find files included using (for example):

    <xsl:include href="xsl/chart.xsl"/>
    <xsl:include href="xsl/tags.xsl"/>

Should work now. To clarify, the XML page is first transformed using the browser's internal XSLT engine. This produces a standard HTML DOM. The browser passes that HTML DOM to its HTML rendering pipeline. The JavaScript and CSS operate no differently on an XML-transformed-HTML page than they would on a static HTML file.
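
For anyone curious, the same pipeline can also be driven by hand with the standard XSLTProcessor API. A minimal sketch (file names are placeholders):

    // Fetch and parse an XML resource (works for both the page and the
    // stylesheet, since XSLT is itself XML).
    function loadXml(url) {
      return fetch(url)
        .then(function (response) { return response.text(); })
        .then(function (text) {
          return new DOMParser().parseFromString(text, "application/xml");
        });
    }

    // Transform page.xml with page.xsl and hand the resulting DOM
    // fragment to the browser's normal HTML rendering pipeline.
    Promise.all([loadXml("page.xml"), loadXml("page.xsl")])
      .then(function (docs) {
        var processor = new XSLTProcessor();
        processor.importStylesheet(docs[1]);
        var fragment = processor.transformToFragment(docs[0], document);
        document.body.appendChild(fragment);
      });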

-----


> Should work now.

So you just fixed the issue? What was the problem causing the mis-render? Thanks for starting the project. I think the basic premise of using the browser for selection and applying transforms manually is a superb idea, possibly the only one that can work consistently in the current messy environment!

-----


This works fine in Firefox, but not Chrome:

    <xsl:include href="xsl/chart.xsl"/>
    <xsl:include href="xsl/tags.xsl"/>

To work in both Firefox and Chrome, all the XSL files need to be moved into the same directory and referenced without a relative path:

    <xsl:include href="chart.xsl"/>
    <xsl:include href="tags.xsl"/>

I wouldn't recommend using client-side XSLT, though, for anything other than a quick proof-of-concept. There are technical differences that can create problems:

https://greenbytes.de/tech/tc/xslt/

The nice thing about client-side XSLT is that you can push the files to servers where you don't have server-side access, and still render the page. Once the XSLT is written, it's relatively easy to migrate to a server-side solution. Using a server-based XSL transformer then removes the headaches associated with client-side XSLT engine differences.
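
As one possible migration path (a sketch only; it assumes the xsltproc command-line tool is installed, and the file names are placeholders):

    // Run the transform server-side (or at build time) with xsltproc,
    // so every browser receives identical, pre-rendered HTML.
    var execFileSync = require("child_process").execFileSync;
    var fs = require("fs");

    var html = execFileSync("xsltproc", ["page.xsl", "page.xml"], {
      encoding: "utf8"
    });

    fs.writeFileSync("page.html", html);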

As an aside, here's an interesting XSL file:

https://bitbucket.org/djarvis/world-politics/src/master/xml/...

It transforms any simple XML document (i.e., attribute-free) into a similarly DIV-nested HTML document. The result is that all the pages in the following web site use a single transformation combined with corresponding CSS files:

http://djarvis.bitbucket.org/xml/

Most places I've worked that employ XSLT use a different XSLT file for each (differing) XML document.

-----


Looks like a standard 3-clause BSD license to me. It's short enough that you can read through it in less than a minute, and know whether you want to use it or not.

-----


And if I read all the licenses for all the software, there goes an hour that I never get back. It's not about me using the software--please see the link I posted above: I'm voluntarily documenting the current state of this calamitous cacophony of cross-browser editors. (Eventually I want to choose one for an app, but I want it to be an editor that works and meets the criteria in the aforementioned link.)

-----


If the LICENSE is important to you (and it is to me when I choose to build software on top of existing libraries) you should take the time to read it.

-----


> Even though Squire isn't meant as a drop-in replacement for web page editing, it still suffers from the same endemic issue that plagues all editors: browser inconsistency.

Squire's purpose is to make things look on-screen and in email the way the user expects them to look. The actual HTML generated is secondary to that. While it's nice if it can produce the same HTML every time on every browser, it's not especially important.

In terms of UI consistency, its intention is to provide a simple, pleasant editing environment that the user is mostly familiar with (keyboard controls, etc.). Again, it doesn't have to be identical everywhere, as long as it does what the user expects.

(Note: I'm a FastMail employee, but not directly involved with Squire).

-----


> Squire's purpose is to make things look on-screen and in email the way the user expects them to look. The actual HTML generated is secondary to that.

How on earth does that work? It seems completely self-contradictory to me: it is exactly the generated HTML markup that needs to stay consistent for one to have a chance of normalizing the viewport through CSS?!

Would really like to hear one of the Squire maintainers' input on this. Thanks for taking part in the discussion, btw.

-----


The following 45 lines of XSL code transform a simple XML document into XHTML DIV tags that mirror the input structure:

    <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <xsl:output
      indent="no"
      method="html"
      doctype-system="about:legacy-compat"
      encoding="utf-8"/>

    <!-- Action parser that responds to HTTP requests. -->
    <xsl:param name="action" select="'do.dhtml'"/>

    <xsl:template match="/">
    <html>
      <head>
        <meta charset='utf-8'/>
        <title>Title</title>

        <link rel='stylesheet' type='text/css' href='css/common.css'/>
      </head>
      <body>
        <xsl:apply-templates/>
        <script type='text/javascript' src='js/common.js'></script>
      </body>
    </html>
    </xsl:template>

    <!-- Make the document complete with div elements and classes. -->
    <xsl:template match="*">
      <div class="{local-name()}"><xsl:apply-templates select="node()|@*"/></div>
    </xsl:template>

    <!-- The 'id' attribute indicates a link. -->
    <xsl:template match="*[@id]">
      <div class="{local-name()}"><a
        href="{$action}?action={local-name()}&amp;id={@id}"><xsl:apply-templates
          select="node()|*"/></a></div>
    </xsl:template>

    <!-- Retain the attributes (except if named "class"). -->
    <xsl:template match="@*">
      <xsl:if test="name() != 'class'"><xsl:copy-of select="."/></xsl:if>
    </xsl:template>

    </xsl:stylesheet>

CSS is then used to customize the presentation layer. For example, all the pages in the following website (navigate by clicking only "Next"; Safari is broken) are rendered entirely with client-side XSLT, using an XML document as the starting point:

http://djarvis.bitbucket.org/xml/

The pie chart on the following page was also generated using client-side XSLT:

http://djarvis.bitbucket.org/xml/resources.xml

The only "trick" that was required was to load CSS (and some JavaScript) based on the current filename in the URL:

http://djarvis.bitbucket.org/xml/js/common.js

Most people write one XSLT page per web page to transform, but by using structured DIVs with classes that correspond to the XML element names, only one general-purpose XSL page is required.
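
For example, a hypothetical input such as (element names made up):

    <page>
      <header>World Politics</header>
      <story id="42">Lead story</story>
    </page>

would come out the other side as (whitespace and the html/head/script boilerplate trimmed):

    <div class="page">
      <div class="header">World Politics</div>
      <div class="story">
        <a href="do.dhtml?action=story&amp;id=42">Lead story</a>
      </div>
    </div>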

This demonstrates the separation of content (XML) from presentation (XHTML/CSS).

-----


Give me an input and the expected output, and I will show you a more readable, shorter, and easier version of the transformation with jQuery. The XSLT is so unreadable it requires comments.
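
Something along these lines, perhaps (an untested sketch; the file name is made up, and it handles only the basic element-to-div case):

    // Recursively mirror an attribute-free XML document as nested divs
    // whose class names echo the element names, like the XSLT above.
    function toDivs(node) {
      var div = $("<div>").addClass(node.localName);
      $(node).contents().each(function () {
        if (this.nodeType === 3) {        // text node: copy it through
          div.append(document.createTextNode(this.nodeValue));
        } else if (this.nodeType === 1) { // element node: recurse
          div.append(toDivs(this));
        }
      });
      return div;
    }

    // Fetch the XML and splice the transformed tree into the page.
    $.get("page.xml", function (xml) {
      $("body").append(toDivs(xml.documentElement));
    }, "xml");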

I once wrote a less-than-2k JavaScript library that loaded TTML markup into a hidden div and displayed it on an HTML5 video player with extra formatting.

Why would someone need XSLT in 2014? It was a failure in 2000.

-----


Relative sizes of planets and stars:

http://davidjarvis.ca/dave/gallery/star-sizes/

-----
